Welcome to Smoke detection

Author: Abhishek Ghosh, Moumita Mukherjee

In this notebook, we use the TensorFlow 2 Object Detection API to train a model on our own dataset.

Our goal is to develop a smoke detector that will help California respond to wildfire threats as quickly as possible.

We will use 733 annotated smoke images, split into training, validation, and test sets in a 7:2:1 ratio: 513 images for training, 147 for validation, and 73 for testing. We would like to acknowledge HPWREN and AIformankind for providing this dataset.
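As a quick sanity check on the 7:2:1 split, here is a minimal sketch (the counts are computed from the totals above, not read from the dataset itself):

```python
# Sanity-check the 7:2:1 train/validation/test split of 733 images.
total = 733
ratios = {"train": 7, "valid": 2, "test": 1}
denom = sum(ratios.values())  # 10

split = {name: round(total * r / denom) for name, r in ratios.items()}
print(split)  # {'train': 513, 'valid': 147, 'test': 73}
```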

We used Roboflow to label the data, apply image preprocessing and data augmentation, and generate TFRecords, among other useful machine learning workflow steps. We will go through each step sequentially; please follow along and avoid skipping any step.

We will take the following steps to implement a TensorFlow 2 object detection model on our smoke dataset:

  • Install TensorFlow2 Object Detection Dependencies
  • Download Smoke Images Dataset and necessary files
  • Write your own TensorFlow2 Object Detection Training Configuration
  • Train Custom TensorFlow2 Object Detection Model
  • Export Custom TensorFlow2 Object Detection Weights
  • Use Trained TensorFlow2 Object Detector For Inference on Test Images
  • Save your model for future applications

Note: Feel free to use your own dataset once you have walked through and understood this notebook!

By the end of this notebook, we will have developed a smoke detector using deep learning in TensorFlow 2.2.

[Image: wildfire1.jpg]

Install TensorFlow2 Object Detection Dependencies

In [ ]:
# We will utilize the GPU in this tutorial.
# TPU configuration is recommended for faster training on larger datasets.
!pip install -U --pre tensorflow=="2.2.0"
Collecting tensorflow==2.2.0
  Downloading https://files.pythonhosted.org/packages/3d/be/679ce5254a8c8d07470efb4a4c00345fae91f766e64f1c2aece8796d7218/tensorflow-2.2.0-cp36-cp36m-manylinux2010_x86_64.whl (516.2MB)
     |████████████████████████████████| 516.2MB 30kB/s 
Requirement already satisfied, skipping upgrade: gast==0.3.3 in /usr/local/lib/python3.6/dist-packages (from tensorflow==2.2.0) (0.3.3)
Requirement already satisfied, skipping upgrade: six>=1.12.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow==2.2.0) (1.15.0)
Collecting tensorflow-estimator<2.3.0,>=2.2.0
  Downloading https://files.pythonhosted.org/packages/a4/f5/926ae53d6a226ec0fda5208e0e581cffed895ccc89e36ba76a8e60895b78/tensorflow_estimator-2.2.0-py2.py3-none-any.whl (454kB)
     |████████████████████████████████| 460kB 43.7MB/s 
Collecting tensorboard<2.3.0,>=2.2.0
  Downloading https://files.pythonhosted.org/packages/1d/74/0a6fcb206dcc72a6da9a62dd81784bfdbff5fedb099982861dc2219014fb/tensorboard-2.2.2-py3-none-any.whl (3.0MB)
     |████████████████████████████████| 3.0MB 53.9MB/s 
Requirement already satisfied, skipping upgrade: numpy<2.0,>=1.16.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow==2.2.0) (1.18.5)
Requirement already satisfied, skipping upgrade: scipy==1.4.1; python_version >= "3" in /usr/local/lib/python3.6/dist-packages (from tensorflow==2.2.0) (1.4.1)
Requirement already satisfied, skipping upgrade: absl-py>=0.7.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow==2.2.0) (0.9.0)
Requirement already satisfied, skipping upgrade: h5py<2.11.0,>=2.10.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow==2.2.0) (2.10.0)
Requirement already satisfied, skipping upgrade: termcolor>=1.1.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow==2.2.0) (1.1.0)
Requirement already satisfied, skipping upgrade: grpcio>=1.8.6 in /usr/local/lib/python3.6/dist-packages (from tensorflow==2.2.0) (1.31.0)
Requirement already satisfied, skipping upgrade: astunparse==1.6.3 in /usr/local/lib/python3.6/dist-packages (from tensorflow==2.2.0) (1.6.3)
Requirement already satisfied, skipping upgrade: protobuf>=3.8.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow==2.2.0) (3.12.4)
Requirement already satisfied, skipping upgrade: keras-preprocessing>=1.1.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow==2.2.0) (1.1.2)
Requirement already satisfied, skipping upgrade: wrapt>=1.11.1 in /usr/local/lib/python3.6/dist-packages (from tensorflow==2.2.0) (1.12.1)
Requirement already satisfied, skipping upgrade: wheel>=0.26; python_version >= "3" in /usr/local/lib/python3.6/dist-packages (from tensorflow==2.2.0) (0.34.2)
Requirement already satisfied, skipping upgrade: opt-einsum>=2.3.2 in /usr/local/lib/python3.6/dist-packages (from tensorflow==2.2.0) (3.3.0)
Requirement already satisfied, skipping upgrade: google-pasta>=0.1.8 in /usr/local/lib/python3.6/dist-packages (from tensorflow==2.2.0) (0.2.0)
Requirement already satisfied, skipping upgrade: requests<3,>=2.21.0 in /usr/local/lib/python3.6/dist-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow==2.2.0) (2.23.0)
Requirement already satisfied, skipping upgrade: google-auth<2,>=1.6.3 in /usr/local/lib/python3.6/dist-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow==2.2.0) (1.17.2)
Requirement already satisfied, skipping upgrade: tensorboard-plugin-wit>=1.6.0 in /usr/local/lib/python3.6/dist-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow==2.2.0) (1.7.0)
Requirement already satisfied, skipping upgrade: werkzeug>=0.11.15 in /usr/local/lib/python3.6/dist-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow==2.2.0) (1.0.1)
Requirement already satisfied, skipping upgrade: markdown>=2.6.8 in /usr/local/lib/python3.6/dist-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow==2.2.0) (3.2.2)
Requirement already satisfied, skipping upgrade: setuptools>=41.0.0 in /usr/local/lib/python3.6/dist-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow==2.2.0) (49.2.0)
Requirement already satisfied, skipping upgrade: google-auth-oauthlib<0.5,>=0.4.1 in /usr/local/lib/python3.6/dist-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow==2.2.0) (0.4.1)
Requirement already satisfied, skipping upgrade: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests<3,>=2.21.0->tensorboard<2.3.0,>=2.2.0->tensorflow==2.2.0) (2.10)
Requirement already satisfied, skipping upgrade: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests<3,>=2.21.0->tensorboard<2.3.0,>=2.2.0->tensorflow==2.2.0) (3.0.4)
Requirement already satisfied, skipping upgrade: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests<3,>=2.21.0->tensorboard<2.3.0,>=2.2.0->tensorflow==2.2.0) (1.24.3)
Requirement already satisfied, skipping upgrade: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests<3,>=2.21.0->tensorboard<2.3.0,>=2.2.0->tensorflow==2.2.0) (2020.6.20)
Requirement already satisfied, skipping upgrade: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.6/dist-packages (from google-auth<2,>=1.6.3->tensorboard<2.3.0,>=2.2.0->tensorflow==2.2.0) (0.2.8)
Requirement already satisfied, skipping upgrade: rsa<5,>=3.1.4; python_version >= "3" in /usr/local/lib/python3.6/dist-packages (from google-auth<2,>=1.6.3->tensorboard<2.3.0,>=2.2.0->tensorflow==2.2.0) (4.6)
Requirement already satisfied, skipping upgrade: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.6/dist-packages (from google-auth<2,>=1.6.3->tensorboard<2.3.0,>=2.2.0->tensorflow==2.2.0) (4.1.1)
Requirement already satisfied, skipping upgrade: importlib-metadata; python_version < "3.8" in /usr/local/lib/python3.6/dist-packages (from markdown>=2.6.8->tensorboard<2.3.0,>=2.2.0->tensorflow==2.2.0) (1.7.0)
Requirement already satisfied, skipping upgrade: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.6/dist-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard<2.3.0,>=2.2.0->tensorflow==2.2.0) (1.3.0)
Requirement already satisfied, skipping upgrade: pyasn1<0.5.0,>=0.4.6 in /usr/local/lib/python3.6/dist-packages (from pyasn1-modules>=0.2.1->google-auth<2,>=1.6.3->tensorboard<2.3.0,>=2.2.0->tensorflow==2.2.0) (0.4.8)
Requirement already satisfied, skipping upgrade: zipp>=0.5 in /usr/local/lib/python3.6/dist-packages (from importlib-metadata; python_version < "3.8"->markdown>=2.6.8->tensorboard<2.3.0,>=2.2.0->tensorflow==2.2.0) (3.1.0)
Requirement already satisfied, skipping upgrade: oauthlib>=3.0.0 in /usr/local/lib/python3.6/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard<2.3.0,>=2.2.0->tensorflow==2.2.0) (3.1.0)
Installing collected packages: tensorflow-estimator, tensorboard, tensorflow
  Found existing installation: tensorflow-estimator 2.3.0
    Uninstalling tensorflow-estimator-2.3.0:
      Successfully uninstalled tensorflow-estimator-2.3.0
  Found existing installation: tensorboard 2.3.0
    Uninstalling tensorboard-2.3.0:
      Successfully uninstalled tensorboard-2.3.0
  Found existing installation: tensorflow 2.3.0
    Uninstalling tensorflow-2.3.0:
      Successfully uninstalled tensorflow-2.3.0
Successfully installed tensorboard-2.2.2 tensorflow-2.2.0 tensorflow-estimator-2.2.0
In [ ]:
import os
import pathlib

# Clone the tensorflow models repository if it doesn't already exist
if "models" in pathlib.Path.cwd().parts:
  while "models" in pathlib.Path.cwd().parts:
    os.chdir('..')
elif not pathlib.Path('models').exists():
  !git clone --depth 1 https://github.com/tensorflow/models
Cloning into 'models'...
remote: Enumerating objects: 1909, done.
remote: Counting objects: 100% (1909/1909), done.
remote: Compressing objects: 100% (1661/1661), done.
remote: Total 1909 (delta 440), reused 799 (delta 230), pack-reused 0
Receiving objects: 100% (1909/1909), 51.32 MiB | 36.54 MiB/s, done.
Resolving deltas: 100% (440/440), done.
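The clone cell above first checks `pathlib.Path.cwd().parts` so that, if we are already inside a `models` directory, it climbs back out instead of cloning again. A small illustration of how `parts` behaves (the paths here are just examples):

```python
import pathlib

# .parts splits a path into its components, so a simple membership
# test tells us whether the path is anywhere inside a "models" directory.
p = pathlib.PurePosixPath("/content/models/research")
print(p.parts)               # ('/', 'content', 'models', 'research')
print("models" in p.parts)   # True

q = pathlib.PurePosixPath("/content")
print("models" in q.parts)   # False
```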
In [ ]:
# Install the Object Detection API
%%bash
cd models/research/
protoc object_detection/protos/*.proto --python_out=.
cp object_detection/packages/tf2/setup.py .
python -m pip install .
Processing ./models/research
Requirement already satisfied: tensorflow-metadata<0.23.0 in /usr/local/lib/python3.6/dist-packages (from object-detection==0.1) (0.22.2)
Collecting avro-python3==1.8.1
  Downloading https://files.pythonhosted.org/packages/7d/7a/90ff9b8013e21942009380e7b86cf19d3dc83adb7042b735f016ca7e2b68/avro-python3-1.8.1.tar.gz
Collecting apache-beam
  Downloading https://files.pythonhosted.org/packages/56/f1/7fcfbff3d3eed7895f10b358844b6e8ed21b230666aabd09d842cd725363/apache_beam-2.23.0-cp36-cp36m-manylinux2010_x86_64.whl (8.3MB)
Requirement already satisfied: pillow in /usr/local/lib/python3.6/dist-packages (from object-detection==0.1) (7.0.0)
Requirement already satisfied: lxml in /usr/local/lib/python3.6/dist-packages (from object-detection==0.1) (4.2.6)
Requirement already satisfied: matplotlib in /usr/local/lib/python3.6/dist-packages (from object-detection==0.1) (3.2.2)
Requirement already satisfied: Cython in /usr/local/lib/python3.6/dist-packages (from object-detection==0.1) (0.29.21)
Requirement already satisfied: contextlib2 in /usr/local/lib/python3.6/dist-packages (from object-detection==0.1) (0.5.5)
Collecting tf-slim
  Downloading https://files.pythonhosted.org/packages/02/97/b0f4a64df018ca018cc035d44f2ef08f91e2e8aa67271f6f19633a015ff7/tf_slim-1.1.0-py2.py3-none-any.whl (352kB)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from object-detection==0.1) (1.15.0)
Requirement already satisfied: pycocotools in /usr/local/lib/python3.6/dist-packages (from object-detection==0.1) (2.0.1)
Requirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from object-detection==0.1) (1.4.1)
Requirement already satisfied: pandas in /usr/local/lib/python3.6/dist-packages (from object-detection==0.1) (1.0.5)
Collecting tf-models-official==2.2.2
  Downloading https://files.pythonhosted.org/packages/99/8e/6db83bab2f86475fa69289848379f642746314131527d8a4ced47a6396af/tf_models_official-2.2.2-py2.py3-none-any.whl (711kB)
Requirement already satisfied: googleapis-common-protos in /usr/local/lib/python3.6/dist-packages (from tensorflow-metadata<0.23.0->object-detection==0.1) (1.52.0)
Requirement already satisfied: protobuf<4,>=3.7 in /usr/local/lib/python3.6/dist-packages (from tensorflow-metadata<0.23.0->object-detection==0.1) (3.12.4)
Collecting hdfs<3.0.0,>=2.1.0
  Downloading https://files.pythonhosted.org/packages/82/39/2c0879b1bcfd1f6ad078eb210d09dbce21072386a3997074ee91e60ddc5a/hdfs-2.5.8.tar.gz (41kB)
Collecting future<1.0.0,>=0.18.2
  Downloading https://files.pythonhosted.org/packages/45/0b/38b06fd9b92dc2b68d58b75f900e97884c45bedd2ff83203d933cf5851c9/future-0.18.2.tar.gz (829kB)
Collecting pyarrow<0.18.0,>=0.15.1; python_version >= "3.0" or platform_system != "Windows"
  Downloading https://files.pythonhosted.org/packages/ba/3f/6cac1714fff444664603f92cb9fbe91c7ae25375880158b9e9691c4584c8/pyarrow-0.17.1-cp36-cp36m-manylinux2014_x86_64.whl (63.8MB)
Requirement already satisfied: pydot<2,>=1.2.0 in /usr/local/lib/python3.6/dist-packages (from apache-beam->object-detection==0.1) (1.3.0)
Requirement already satisfied: typing-extensions<3.8.0,>=3.7.0 in /usr/local/lib/python3.6/dist-packages (from apache-beam->object-detection==0.1) (3.7.4.2)
Collecting mock<3.0.0,>=1.0.1
  Downloading https://files.pythonhosted.org/packages/e6/35/f187bdf23be87092bd0f1200d43d23076cee4d0dec109f195173fd3ebc79/mock-2.0.0-py2.py3-none-any.whl (56kB)
Requirement already satisfied: pytz>=2018.3 in /usr/local/lib/python3.6/dist-packages (from apache-beam->object-detection==0.1) (2018.9)
Collecting dill<0.3.2,>=0.3.1.1
  Downloading https://files.pythonhosted.org/packages/c7/11/345f3173809cea7f1a193bfbf02403fff250a3360e0e118a1630985e547d/dill-0.3.1.1.tar.gz (151kB)
Collecting oauth2client<4,>=2.0.1
  Downloading https://files.pythonhosted.org/packages/c0/7b/bc893e35d6ca46a72faa4b9eaac25c687ce60e1fbe978993fe2de1b0ff0d/oauth2client-3.0.0.tar.gz (77kB)
Requirement already satisfied: crcmod<2.0,>=1.7 in /usr/local/lib/python3.6/dist-packages (from apache-beam->object-detection==0.1) (1.7)
Requirement already satisfied: numpy<2,>=1.14.3 in /usr/local/lib/python3.6/dist-packages (from apache-beam->object-detection==0.1) (1.18.5)
Requirement already satisfied: httplib2<0.18.0,>=0.8 in /usr/local/lib/python3.6/dist-packages (from apache-beam->object-detection==0.1) (0.17.4)
Requirement already satisfied: pymongo<4.0.0,>=3.8.0 in /usr/local/lib/python3.6/dist-packages (from apache-beam->object-detection==0.1) (3.11.0)
Collecting fastavro<0.24,>=0.21.4
  Downloading https://files.pythonhosted.org/packages/98/8e/1d62398df5569a805d956bd96df1b2c06f973e8d3f1f7489adf9c58b2824/fastavro-0.23.6-cp36-cp36m-manylinux2010_x86_64.whl (1.4MB)
Requirement already satisfied: grpcio<2,>=1.29.0 in /usr/local/lib/python3.6/dist-packages (from apache-beam->object-detection==0.1) (1.31.0)
Requirement already satisfied: python-dateutil<3,>=2.8.0 in /usr/local/lib/python3.6/dist-packages (from apache-beam->object-detection==0.1) (2.8.1)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->object-detection==0.1) (2.4.7)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib->object-detection==0.1) (0.10.0)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->object-detection==0.1) (1.2.0)
Requirement already satisfied: absl-py>=0.2.2 in /usr/local/lib/python3.6/dist-packages (from tf-slim->object-detection==0.1) (0.9.0)
Requirement already satisfied: setuptools>=18.0 in /usr/local/lib/python3.6/dist-packages (from pycocotools->object-detection==0.1) (49.2.0)
Collecting py-cpuinfo>=3.3.0
  Downloading https://files.pythonhosted.org/packages/f6/f5/8e6e85ce2e9f6e05040cf0d4e26f43a4718bcc4bce988b433276d4b1a5c1/py-cpuinfo-7.0.0.tar.gz (95kB)
Requirement already satisfied: tensorflow-hub>=0.6.0 in /usr/local/lib/python3.6/dist-packages (from tf-models-official==2.2.2->object-detection==0.1) (0.8.0)
Collecting mlperf-compliance==0.0.10
  Downloading https://files.pythonhosted.org/packages/f4/08/f2febd8cbd5c9371f7dab311e90400d83238447ba7609b3bf0145b4cb2a2/mlperf_compliance-0.0.10-py3-none-any.whl
Requirement already satisfied: kaggle>=1.3.9 in /usr/local/lib/python3.6/dist-packages (from tf-models-official==2.2.2->object-detection==0.1) (1.5.6)
Requirement already satisfied: gin-config in /usr/local/lib/python3.6/dist-packages (from tf-models-official==2.2.2->object-detection==0.1) (0.3.0)
Collecting sentencepiece
  Downloading https://files.pythonhosted.org/packages/d4/a4/d0a884c4300004a78cca907a6ff9a5e9fe4f090f5d95ab341c53d28cbc58/sentencepiece-0.1.91-cp36-cp36m-manylinux1_x86_64.whl (1.1MB)
Collecting opencv-python-headless
  Downloading https://files.pythonhosted.org/packages/35/7b/628da8b9f91342432a9432d900d5e2771c387969430e7d4a513dc6dd7804/opencv_python_headless-4.4.0.40-cp36-cp36m-manylinux2014_x86_64.whl (36.6MB)
Requirement already satisfied: psutil>=5.4.3 in /usr/local/lib/python3.6/dist-packages (from tf-models-official==2.2.2->object-detection==0.1) (5.4.8)
Requirement already satisfied: pyyaml in /usr/local/lib/python3.6/dist-packages (from tf-models-official==2.2.2->object-detection==0.1) (3.13)
Requirement already satisfied: google-api-python-client>=1.6.7 in /usr/local/lib/python3.6/dist-packages (from tf-models-official==2.2.2->object-detection==0.1) (1.7.12)
Requirement already satisfied: tensorflow-addons in /usr/local/lib/python3.6/dist-packages (from tf-models-official==2.2.2->object-detection==0.1) (0.8.3)
Collecting tensorflow-model-optimization>=0.2.1
  Downloading https://files.pythonhosted.org/packages/1a/cc/4b0831f492396f03a4563cc749ad94cbf7af986aaa7a7d89e5979029ce5c/tensorflow_model_optimization-0.4.1-py2.py3-none-any.whl (172kB)
Requirement already satisfied: dataclasses in /usr/local/lib/python3.6/dist-packages (from tf-models-official==2.2.2->object-detection==0.1) (0.7)
Requirement already satisfied: tensorflow>=2.2.0 in /usr/local/lib/python3.6/dist-packages (from tf-models-official==2.2.2->object-detection==0.1) (2.2.0)
Collecting typing==3.7.4.1
  Downloading https://files.pythonhosted.org/packages/fe/2e/b480ee1b75e6d17d2993738670e75c1feeb9ff7f64452153cf018051cc92/typing-3.7.4.1-py3-none-any.whl
Requirement already satisfied: google-cloud-bigquery>=0.31.0 in /usr/local/lib/python3.6/dist-packages (from tf-models-official==2.2.2->object-detection==0.1) (1.21.0)
Requirement already satisfied: tensorflow-datasets in /usr/local/lib/python3.6/dist-packages (from tf-models-official==2.2.2->object-detection==0.1) (2.1.0)
Requirement already satisfied: docopt in /usr/local/lib/python3.6/dist-packages (from hdfs<3.0.0,>=2.1.0->apache-beam->object-detection==0.1) (0.6.2)
Requirement already satisfied: requests>=2.7.0 in /usr/local/lib/python3.6/dist-packages (from hdfs<3.0.0,>=2.1.0->apache-beam->object-detection==0.1) (2.23.0)
Collecting pbr>=0.11
  Downloading https://files.pythonhosted.org/packages/96/ba/aa953a11ec014b23df057ecdbc922fdb40ca8463466b1193f3367d2711a6/pbr-5.4.5-py2.py3-none-any.whl (110kB)
Requirement already satisfied: pyasn1>=0.1.7 in /usr/local/lib/python3.6/dist-packages (from oauth2client<4,>=2.0.1->apache-beam->object-detection==0.1) (0.4.8)
Requirement already satisfied: pyasn1-modules>=0.0.5 in /usr/local/lib/python3.6/dist-packages (from oauth2client<4,>=2.0.1->apache-beam->object-detection==0.1) (0.2.8)
Requirement already satisfied: rsa>=3.1.4 in /usr/local/lib/python3.6/dist-packages (from oauth2client<4,>=2.0.1->apache-beam->object-detection==0.1) (4.6)
Requirement already satisfied: tqdm in /usr/local/lib/python3.6/dist-packages (from kaggle>=1.3.9->tf-models-official==2.2.2->object-detection==0.1) (4.41.1)
Requirement already satisfied: python-slugify in /usr/local/lib/python3.6/dist-packages (from kaggle>=1.3.9->tf-models-official==2.2.2->object-detection==0.1) (4.0.1)
Requirement already satisfied: urllib3<1.25,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from kaggle>=1.3.9->tf-models-official==2.2.2->object-detection==0.1) (1.24.3)
Requirement already satisfied: certifi in /usr/local/lib/python3.6/dist-packages (from kaggle>=1.3.9->tf-models-official==2.2.2->object-detection==0.1) (2020.6.20)
Requirement already satisfied: uritemplate<4dev,>=3.0.0 in /usr/local/lib/python3.6/dist-packages (from google-api-python-client>=1.6.7->tf-models-official==2.2.2->object-detection==0.1) (3.0.1)
Requirement already satisfied: google-auth>=1.4.1 in /usr/local/lib/python3.6/dist-packages (from google-api-python-client>=1.6.7->tf-models-official==2.2.2->object-detection==0.1) (1.17.2)
Requirement already satisfied: google-auth-httplib2>=0.0.3 in /usr/local/lib/python3.6/dist-packages (from google-api-python-client>=1.6.7->tf-models-official==2.2.2->object-detection==0.1) (0.0.4)
Requirement already satisfied: typeguard in /usr/local/lib/python3.6/dist-packages (from tensorflow-addons->tf-models-official==2.2.2->object-detection==0.1) (2.7.1)
Requirement already satisfied: dm-tree~=0.1.1 in /usr/local/lib/python3.6/dist-packages (from tensorflow-model-optimization>=0.2.1->tf-models-official==2.2.2->object-detection==0.1) (0.1.5)
Requirement already satisfied: wheel>=0.26; python_version >= "3" in /usr/local/lib/python3.6/dist-packages (from tensorflow>=2.2.0->tf-models-official==2.2.2->object-detection==0.1) (0.34.2)
Requirement already satisfied: tensorflow-estimator<2.3.0,>=2.2.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=2.2.0->tf-models-official==2.2.2->object-detection==0.1) (2.2.0)
Requirement already satisfied: gast==0.3.3 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=2.2.0->tf-models-official==2.2.2->object-detection==0.1) (0.3.3)
Requirement already satisfied: wrapt>=1.11.1 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=2.2.0->tf-models-official==2.2.2->object-detection==0.1) (1.12.1)
Requirement already satisfied: h5py<2.11.0,>=2.10.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=2.2.0->tf-models-official==2.2.2->object-detection==0.1) (2.10.0)
Requirement already satisfied: opt-einsum>=2.3.2 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=2.2.0->tf-models-official==2.2.2->object-detection==0.1) (3.3.0)
Requirement already satisfied: astunparse==1.6.3 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=2.2.0->tf-models-official==2.2.2->object-detection==0.1) (1.6.3)
Requirement already satisfied: tensorboard<2.3.0,>=2.2.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=2.2.0->tf-models-official==2.2.2->object-detection==0.1) (2.2.2)
Requirement already satisfied: termcolor>=1.1.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=2.2.0->tf-models-official==2.2.2->object-detection==0.1) (1.1.0)
Requirement already satisfied: keras-preprocessing>=1.1.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=2.2.0->tf-models-official==2.2.2->object-detection==0.1) (1.1.2)
Requirement already satisfied: google-pasta>=0.1.8 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=2.2.0->tf-models-official==2.2.2->object-detection==0.1) (0.2.0)
Requirement already satisfied: google-cloud-core<2.0dev,>=1.0.3 in /usr/local/lib/python3.6/dist-packages (from google-cloud-bigquery>=0.31.0->tf-models-official==2.2.2->object-detection==0.1) (1.0.3)
Requirement already satisfied: google-resumable-media!=0.4.0,<0.5.0dev,>=0.3.1 in /usr/local/lib/python3.6/dist-packages (from google-cloud-bigquery>=0.31.0->tf-models-official==2.2.2->object-detection==0.1) (0.4.1)
Requirement already satisfied: promise in /usr/local/lib/python3.6/dist-packages (from tensorflow-datasets->tf-models-official==2.2.2->object-detection==0.1) (2.3)
Requirement already satisfied: attrs>=18.1.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-datasets->tf-models-official==2.2.2->object-detection==0.1) (19.3.0)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests>=2.7.0->hdfs<3.0.0,>=2.1.0->apache-beam->object-detection==0.1) (2.10)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests>=2.7.0->hdfs<3.0.0,>=2.1.0->apache-beam->object-detection==0.1) (3.0.4)
Requirement already satisfied: text-unidecode>=1.3 in /usr/local/lib/python3.6/dist-packages (from python-slugify->kaggle>=1.3.9->tf-models-official==2.2.2->object-detection==0.1) (1.3)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.6/dist-packages (from google-auth>=1.4.1->google-api-python-client>=1.6.7->tf-models-official==2.2.2->object-detection==0.1) (4.1.1)
Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.6/dist-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow>=2.2.0->tf-models-official==2.2.2->object-detection==0.1) (3.2.2)
Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /usr/local/lib/python3.6/dist-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow>=2.2.0->tf-models-official==2.2.2->object-detection==0.1) (0.4.1)
Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /usr/local/lib/python3.6/dist-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow>=2.2.0->tf-models-official==2.2.2->object-detection==0.1) (1.7.0)
Requirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.6/dist-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow>=2.2.0->tf-models-official==2.2.2->object-detection==0.1) (1.0.1)
Requirement already satisfied: google-api-core<2.0.0dev,>=1.14.0 in /usr/local/lib/python3.6/dist-packages (from google-cloud-core<2.0dev,>=1.0.3->google-cloud-bigquery>=0.31.0->tf-models-official==2.2.2->object-detection==0.1) (1.16.0)
Requirement already satisfied: importlib-metadata; python_version < "3.8" in /usr/local/lib/python3.6/dist-packages (from markdown>=2.6.8->tensorboard<2.3.0,>=2.2.0->tensorflow>=2.2.0->tf-models-official==2.2.2->object-detection==0.1) (1.7.0)
Requirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.6/dist-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard<2.3.0,>=2.2.0->tensorflow>=2.2.0->tf-models-official==2.2.2->object-detection==0.1) (1.3.0)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.6/dist-packages (from importlib-metadata; python_version < "3.8"->markdown>=2.6.8->tensorboard<2.3.0,>=2.2.0->tensorflow>=2.2.0->tf-models-official==2.2.2->object-detection==0.1) (3.1.0)
Requirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.6/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard<2.3.0,>=2.2.0->tensorflow>=2.2.0->tf-models-official==2.2.2->object-detection==0.1) (3.1.0)
Building wheels for collected packages: object-detection, avro-python3, hdfs, future, dill, oauth2client, py-cpuinfo
  Building wheel for object-detection (setup.py): started
  Building wheel for object-detection (setup.py): finished with status 'done'
  Created wheel for object-detection: filename=object_detection-0.1-cp36-none-any.whl size=1550577 sha256=a1f1d3554674720a21af96b6acef5b05e25ae3b2e1243b94072797b7ff8c5e7f
  Stored in directory: /tmp/pip-ephem-wheel-cache-89e54g5h/wheels/94/49/4b/39b051683087a22ef7e80ec52152a27249d1a644ccf4e442ea
  Building wheel for avro-python3 (setup.py): started
  Building wheel for avro-python3 (setup.py): finished with status 'done'
  Created wheel for avro-python3: filename=avro_python3-1.8.1-cp36-none-any.whl size=43164 sha256=9793f1e2cbd2a9fd0e5fcd7ad26a450d823e20a92bb09350556578e62690c5c8
  Stored in directory: /root/.cache/pip/wheels/5c/04/3c/ffe3561c960133e747de503dea3e3facef2dea533bc92cb21a
  Building wheel for hdfs (setup.py): started
  Building wheel for hdfs (setup.py): finished with status 'done'
  Created wheel for hdfs: filename=hdfs-2.5.8-cp36-none-any.whl size=33213 sha256=4c6343875d68b52f8b62730354e2a4d1fc15835d5a94d73100cfbadeb98be2ca
  Stored in directory: /root/.cache/pip/wheels/fe/a7/05/23e3699975fc20f8a30e00ac1e515ab8c61168e982abe4ce70
  Building wheel for future (setup.py): started
  Building wheel for future (setup.py): finished with status 'done'
  Created wheel for future: filename=future-0.18.2-cp36-none-any.whl size=491057 sha256=e0a579cd2cedb6fe31c1b6ed76a76f020de42babe3077b2c164e975314328dde
  Stored in directory: /root/.cache/pip/wheels/8b/99/a0/81daf51dcd359a9377b110a8a886b3895921802d2fc1b2397e
  Building wheel for dill (setup.py): started
  Building wheel for dill (setup.py): finished with status 'done'
  Created wheel for dill: filename=dill-0.3.1.1-cp36-none-any.whl size=78532 sha256=3f1d00d11b72c9d497c59bd268eaa5cc1bd9a5d7d1f5cb85c12020cb8fc554e3
  Stored in directory: /root/.cache/pip/wheels/59/b1/91/f02e76c732915c4015ab4010f3015469866c1eb9b14058d8e7
  Building wheel for oauth2client (setup.py): started
  Building wheel for oauth2client (setup.py): finished with status 'done'
  Created wheel for oauth2client: filename=oauth2client-3.0.0-cp36-none-any.whl size=106382 sha256=eb344cbf233262e32085349ae21fe6f5a7b15561ba24a3907cff4b04dd6ad011
  Stored in directory: /root/.cache/pip/wheels/48/f7/87/b932f09c6335dbcf45d916937105a372ab14f353a9ca431d7d
  Building wheel for py-cpuinfo (setup.py): started
  Building wheel for py-cpuinfo (setup.py): finished with status 'done'
  Created wheel for py-cpuinfo: filename=py_cpuinfo-7.0.0-cp36-none-any.whl size=20069 sha256=200b884e503a7f9b34c06970ba98c0c933e0524fbcf4dd318c1d72c708055d13
  Stored in directory: /root/.cache/pip/wheels/f1/93/7b/127daf0c3a5a49feb2fecd468d508067c733fba5192f726ad1
Successfully built object-detection avro-python3 hdfs future dill oauth2client py-cpuinfo
Installing collected packages: avro-python3, hdfs, future, pyarrow, pbr, mock, dill, oauth2client, fastavro, apache-beam, tf-slim, py-cpuinfo, mlperf-compliance, sentencepiece, opencv-python-headless, tensorflow-model-optimization, typing, tf-models-official, object-detection
  Found existing installation: future 0.16.0
    Uninstalling future-0.16.0:
      Successfully uninstalled future-0.16.0
  Found existing installation: pyarrow 0.14.1
    Uninstalling pyarrow-0.14.1:
      Successfully uninstalled pyarrow-0.14.1
  Found existing installation: dill 0.3.2
    Uninstalling dill-0.3.2:
      Successfully uninstalled dill-0.3.2
  Found existing installation: oauth2client 4.1.3
    Uninstalling oauth2client-4.1.3:
      Successfully uninstalled oauth2client-4.1.3
Successfully installed apache-beam-2.23.0 avro-python3-1.8.1 dill-0.3.1.1 fastavro-0.23.6 future-0.18.2 hdfs-2.5.8 mlperf-compliance-0.0.10 mock-2.0.0 oauth2client-3.0.0 object-detection-0.1 opencv-python-headless-4.4.0.40 pbr-5.4.5 py-cpuinfo-7.0.0 pyarrow-0.17.1 sentencepiece-0.1.91 tensorflow-model-optimization-0.4.1 tf-models-official-2.2.2 tf-slim-1.1.0 typing-3.7.4.1
ERROR: pydrive 1.3.1 has requirement oauth2client>=4.0.0, but you'll have oauth2client 3.0.0 which is incompatible.
ERROR: multiprocess 0.70.10 has requirement dill>=0.3.2, but you'll have dill 0.3.1.1 which is incompatible.

Perform Necessary Imports

In [ ]:
import matplotlib
import matplotlib.pyplot as plt

import os
import random
import io
import imageio
import glob
import scipy.misc
import numpy as np
from six import BytesIO
from PIL import Image, ImageDraw, ImageFont
from IPython.display import display, Javascript
from IPython.display import Image as IPyImage

import tensorflow as tf

from object_detection.utils import label_map_util
from object_detection.utils import config_util
from object_detection.utils import visualization_utils as viz_utils
from object_detection.utils import colab_utils
from object_detection.builders import model_builder

%matplotlib inline
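Since this tutorial relies on a GPU, it is worth confirming that TensorFlow actually sees one before training. A minimal check, using the standard TF 2.x device-enumeration API:

```python
import tensorflow as tf

# Confirm the TensorFlow version and that a GPU is visible.
print("TensorFlow:", tf.__version__)

gpus = tf.config.list_physical_devices("GPU")
if gpus:
    print("GPU(s) found:", [g.name for g in gpus])
else:
    print("No GPU found -- in Colab, check Runtime > Change runtime type.")
```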

Verify the Object Detection API Installation by Running the Model Builder Test

In [ ]:
# Run the model builder test to verify the installation
!python ./models/research/object_detection/builders/model_builder_tf2_test.py
Running tests under Python 3.6.9: /usr/bin/python3
[ RUN      ] ModelBuilderTF2Test.test_create_center_net_model
2020-08-13 22:04:47.082878: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2020-08-13 22:04:47.141833: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-08-13 22:04:47.142413: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties: 
pciBusID: 0000:00:04.0 name: Tesla T4 computeCapability: 7.5
coreClock: 1.59GHz coreCount: 40 deviceMemorySize: 14.73GiB deviceMemoryBandwidth: 298.08GiB/s
... [repeated CUDA library loading and XLA service initialization messages truncated] ...
2020-08-13 22:04:48.547895: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 14071 MB memory) -> physical GPU (device: 0, name: Tesla T4, pci bus id: 0000:00:04.0, compute capability: 7.5)
[       OK ] ModelBuilderTF2Test.test_create_center_net_model
[ RUN      ] ModelBuilderTF2Test.test_create_experimental_model
[       OK ] ModelBuilderTF2Test.test_create_experimental_model
[ RUN      ] ModelBuilderTF2Test.test_create_faster_rcnn_from_config_with_crop_feature(True)
[       OK ] ModelBuilderTF2Test.test_create_faster_rcnn_from_config_with_crop_feature(True)
[ RUN      ] ModelBuilderTF2Test.test_create_faster_rcnn_from_config_with_crop_feature(False)
[       OK ] ModelBuilderTF2Test.test_create_faster_rcnn_from_config_with_crop_feature(False)
[ RUN      ] ModelBuilderTF2Test.test_create_faster_rcnn_model_from_config_with_example_miner
[       OK ] ModelBuilderTF2Test.test_create_faster_rcnn_model_from_config_with_example_miner
[ RUN      ] ModelBuilderTF2Test.test_create_faster_rcnn_models_from_config_faster_rcnn_with_matmul
[       OK ] ModelBuilderTF2Test.test_create_faster_rcnn_models_from_config_faster_rcnn_with_matmul
[ RUN      ] ModelBuilderTF2Test.test_create_faster_rcnn_models_from_config_faster_rcnn_without_matmul
[       OK ] ModelBuilderTF2Test.test_create_faster_rcnn_models_from_config_faster_rcnn_without_matmul
[ RUN      ] ModelBuilderTF2Test.test_create_faster_rcnn_models_from_config_mask_rcnn_with_matmul
[       OK ] ModelBuilderTF2Test.test_create_faster_rcnn_models_from_config_mask_rcnn_with_matmul
[ RUN      ] ModelBuilderTF2Test.test_create_faster_rcnn_models_from_config_mask_rcnn_without_matmul
[       OK ] ModelBuilderTF2Test.test_create_faster_rcnn_models_from_config_mask_rcnn_without_matmul
[ RUN      ] ModelBuilderTF2Test.test_create_rfcn_model_from_config
[       OK ] ModelBuilderTF2Test.test_create_rfcn_model_from_config
[ RUN      ] ModelBuilderTF2Test.test_create_ssd_fpn_model_from_config
[       OK ] ModelBuilderTF2Test.test_create_ssd_fpn_model_from_config
[ RUN      ] ModelBuilderTF2Test.test_create_ssd_models_from_config
I0813 22:04:57.431106 139872938780544 ssd_efficientnet_bifpn_feature_extractor.py:144] EfficientDet EfficientNet backbone version: efficientnet-b0
I0813 22:04:57.431298 139872938780544 ssd_efficientnet_bifpn_feature_extractor.py:145] EfficientDet BiFPN num filters: 64
I0813 22:04:57.431377 139872938780544 ssd_efficientnet_bifpn_feature_extractor.py:147] EfficientDet BiFPN num iterations: 3
I0813 22:04:57.439035 139872938780544 efficientnet_model.py:146] round_filter input=32 output=32
I0813 22:04:57.478844 139872938780544 efficientnet_model.py:146] round_filter input=32 output=32
I0813 22:04:57.478965 139872938780544 efficientnet_model.py:146] round_filter input=16 output=16
I0813 22:04:57.585925 139872938780544 efficientnet_model.py:146] round_filter input=16 output=16
I0813 22:04:57.586045 139872938780544 efficientnet_model.py:146] round_filter input=24 output=24
I0813 22:04:57.887572 139872938780544 efficientnet_model.py:146] round_filter input=24 output=24
I0813 22:04:57.887784 139872938780544 efficientnet_model.py:146] round_filter input=40 output=40
I0813 22:04:58.187594 139872938780544 efficientnet_model.py:146] round_filter input=40 output=40
I0813 22:04:58.187778 139872938780544 efficientnet_model.py:146] round_filter input=80 output=80
I0813 22:04:58.654487 139872938780544 efficientnet_model.py:146] round_filter input=80 output=80
I0813 22:04:58.654651 139872938780544 efficientnet_model.py:146] round_filter input=112 output=112
I0813 22:04:59.101703 139872938780544 efficientnet_model.py:146] round_filter input=112 output=112
I0813 22:04:59.101892 139872938780544 efficientnet_model.py:146] round_filter input=192 output=192
I0813 22:04:59.861669 139872938780544 efficientnet_model.py:146] round_filter input=192 output=192
I0813 22:04:59.861849 139872938780544 efficientnet_model.py:146] round_filter input=320 output=320
I0813 22:05:00.004044 139872938780544 efficientnet_model.py:146] round_filter input=1280 output=1280
I0813 22:05:00.060943 139872938780544 efficientnet_model.py:459] Building model efficientnet with params ModelConfig(width_coefficient=1.0, depth_coefficient=1.0, resolution=224, dropout_rate=0.2, blocks=(BlockConfig(input_filters=32, output_filters=16, kernel_size=3, num_repeat=1, expand_ratio=1, strides=(1, 1), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=16, output_filters=24, kernel_size=3, num_repeat=2, expand_ratio=6, strides=(2, 2), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=24, output_filters=40, kernel_size=5, num_repeat=2, expand_ratio=6, strides=(2, 2), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=40, output_filters=80, kernel_size=3, num_repeat=3, expand_ratio=6, strides=(2, 2), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=80, output_filters=112, kernel_size=5, num_repeat=3, expand_ratio=6, strides=(1, 1), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=112, output_filters=192, kernel_size=5, num_repeat=4, expand_ratio=6, strides=(2, 2), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=192, output_filters=320, kernel_size=3, num_repeat=1, expand_ratio=6, strides=(1, 1), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise')), stem_base_filters=32, top_base_filters=1280, activation='simple_swish', batch_norm='default', bn_momentum=0.99, bn_epsilon=0.001, weight_decay=5e-06, drop_connect_rate=0.2, depth_divisor=8, min_depth=None, use_se=True, input_channels=3, num_classes=1000, model_name='efficientnet', rescale_input=False, data_format='channels_last', dtype='float32')
... [analogous round_filter and ModelConfig messages for EfficientNet backbones b1-b6 truncated] ...
I0813 22:05:32.406224 139872938780544 ssd_efficientnet_bifpn_feature_extractor.py:144] EfficientDet EfficientNet backbone version: efficientnet-b7
I0813 22:05:32.406397 139872938780544 ssd_efficientnet_bifpn_feature_extractor.py:145] EfficientDet BiFPN num filters: 384
I0813 22:05:32.406474 139872938780544 ssd_efficientnet_bifpn_feature_extractor.py:147] EfficientDet BiFPN num iterations: 8
I0813 22:05:32.412995 139872938780544 efficientnet_model.py:146] round_filter input=32 output=64
I0813 22:05:32.457632 139872938780544 efficientnet_model.py:146] round_filter input=32 output=64
I0813 22:05:32.457749 139872938780544 efficientnet_model.py:146] round_filter input=16 output=32
I0813 22:05:32.966125 139872938780544 efficientnet_model.py:146] round_filter input=16 output=32
I0813 22:05:32.966293 139872938780544 efficientnet_model.py:146] round_filter input=24 output=48
I0813 22:05:34.144050 139872938780544 efficientnet_model.py:146] round_filter input=24 output=48
I0813 22:05:34.144227 139872938780544 efficientnet_model.py:146] round_filter input=40 output=80
I0813 22:05:35.793898 139872938780544 efficientnet_model.py:146] round_filter input=40 output=80
I0813 22:05:35.794089 139872938780544 efficientnet_model.py:146] round_filter input=80 output=160
I0813 22:05:37.494517 139872938780544 efficientnet_model.py:146] round_filter input=80 output=160
I0813 22:05:37.494681 139872938780544 efficientnet_model.py:146] round_filter input=112 output=224
I0813 22:05:39.285119 139872938780544 efficientnet_model.py:146] round_filter input=112 output=224
I0813 22:05:39.285302 139872938780544 efficientnet_model.py:146] round_filter input=192 output=384
I0813 22:05:41.543824 139872938780544 efficientnet_model.py:146] round_filter input=192 output=384
I0813 22:05:41.543990 139872938780544 efficientnet_model.py:146] round_filter input=320 output=640
I0813 22:05:42.267034 139872938780544 efficientnet_model.py:146] round_filter input=1280 output=2560
I0813 22:05:42.331136 139872938780544 efficientnet_model.py:459] Building model efficientnet with params ModelConfig(width_coefficient=2.0, depth_coefficient=3.1, resolution=600, dropout_rate=0.5, blocks=(BlockConfig(input_filters=32, output_filters=16, kernel_size=3, num_repeat=1, expand_ratio=1, strides=(1, 1), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=16, output_filters=24, kernel_size=3, num_repeat=2, expand_ratio=6, strides=(2, 2), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=24, output_filters=40, kernel_size=5, num_repeat=2, expand_ratio=6, strides=(2, 2), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=40, output_filters=80, kernel_size=3, num_repeat=3, expand_ratio=6, strides=(2, 2), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=80, output_filters=112, kernel_size=5, num_repeat=3, expand_ratio=6, strides=(1, 1), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=112, output_filters=192, kernel_size=5, num_repeat=4, expand_ratio=6, strides=(2, 2), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=192, output_filters=320, kernel_size=3, num_repeat=1, expand_ratio=6, strides=(1, 1), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise')), stem_base_filters=32, top_base_filters=1280, activation='simple_swish', batch_norm='default', bn_momentum=0.99, bn_epsilon=0.001, weight_decay=5e-06, drop_connect_rate=0.2, depth_divisor=8, min_depth=None, use_se=True, input_channels=3, num_classes=1000, model_name='efficientnet', rescale_input=False, data_format='channels_last', dtype='float32')
[       OK ] ModelBuilderTF2Test.test_create_ssd_models_from_config
[ RUN      ] ModelBuilderTF2Test.test_invalid_faster_rcnn_batchnorm_update
[       OK ] ModelBuilderTF2Test.test_invalid_faster_rcnn_batchnorm_update
[ RUN      ] ModelBuilderTF2Test.test_invalid_first_stage_nms_iou_threshold
[       OK ] ModelBuilderTF2Test.test_invalid_first_stage_nms_iou_threshold
[ RUN      ] ModelBuilderTF2Test.test_invalid_model_config_proto
[       OK ] ModelBuilderTF2Test.test_invalid_model_config_proto
[ RUN      ] ModelBuilderTF2Test.test_invalid_second_stage_batch_size
[       OK ] ModelBuilderTF2Test.test_invalid_second_stage_batch_size
[ RUN      ] ModelBuilderTF2Test.test_session
[  SKIPPED ] ModelBuilderTF2Test.test_session
[ RUN      ] ModelBuilderTF2Test.test_unknown_faster_rcnn_feature_extractor
[       OK ] ModelBuilderTF2Test.test_unknown_faster_rcnn_feature_extractor
[ RUN      ] ModelBuilderTF2Test.test_unknown_meta_architecture
[       OK ] ModelBuilderTF2Test.test_unknown_meta_architecture
[ RUN      ] ModelBuilderTF2Test.test_unknown_ssd_feature_extractor
[       OK ] ModelBuilderTF2Test.test_unknown_ssd_feature_extractor
----------------------------------------------------------------------
Ran 20 tests in 55.667s

OK (skipped=1)

Create functions to load images and plot detections using the Colab visualization utilities

In [ ]:
def load_image_into_numpy_array(path):
  """Load an image from file into a numpy array.

  Puts image into numpy array to feed into tensorflow graph.
  Note that by convention we put it into a numpy array with shape
  (height, width, channels), where channels=3 for RGB.

  Args:
    path: a file path.

  Returns:
    uint8 numpy array with shape (img_height, img_width, 3)
  """
  img_data = tf.io.gfile.GFile(path, 'rb').read()
  image = Image.open(BytesIO(img_data))
  (im_width, im_height) = image.size
  return np.array(image.getdata()).reshape(
      (im_height, im_width, 3)).astype(np.uint8)

def plot_detections(image_np,
                    boxes,
                    classes,
                    scores,
                    category_index,
                    figsize=(12, 16),
                    image_name=None):
  """Wrapper function to visualize detections.

  Args:
    image_np: uint8 numpy array with shape (img_height, img_width, 3)
    boxes: a numpy array of shape [N, 4]
    classes: a numpy array of shape [N]. Note that class indices are 1-based,
      and match the keys in the label map.
    scores: a numpy array of shape [N] or None.  If scores=None, then
      this function assumes that the boxes to be plotted are groundtruth
      boxes and plot all boxes as black with no classes or scores.
    category_index: a dict containing category dictionaries (each holding
      category index `id` and category name `name`) keyed by category indices.
    figsize: size for the figure.
    image_name: a name for the image file.
  """
  image_np_with_annotations = image_np.copy()
  viz_utils.visualize_boxes_and_labels_on_image_array(
      image_np_with_annotations,
      boxes,
      classes,
      scores,
      category_index,
      use_normalized_coordinates=True,
      min_score_thresh=0.8)
  if image_name:
    plt.imsave(image_name, image_np_with_annotations)
  else:
    plt.imshow(image_np_with_annotations)
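The reshape in `load_image_into_numpy_array` relies on PIL returning pixels as a flat, row-major sequence. The round trip can be sketched with NumPy alone (a toy example with made-up pixel values, not part of the detection pipeline):

```python
import numpy as np

# Simulate PIL's flat (R, G, B) pixel sequence for a 2x3 image.
im_height, im_width = 2, 3
flat_pixels = np.arange(im_height * im_width * 3)  # 18 values, row-major

# Same reshape as load_image_into_numpy_array: (height, width, channels).
image_np = flat_pixels.reshape((im_height, im_width, 3)).astype(np.uint8)

print(image_np.shape)  # (2, 3, 3)
print(image_np[0, 1])  # second pixel of the first row: [3 4 5]
```

This is why the reshape uses `(im_height, im_width, 3)` even though `image.size` returns `(width, height)`.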

Prepare TensorFlow 2 Object Detection Training Data

Roboflow automatically creates our TFRecord and label_map files that we need!

Generating your own TFRecords is the only step you need to change for your own custom dataset.

Because we need one TFRecord file for our training data, and one TFRecord file for our test data, we'll create two separate datasets in Roboflow and generate one set of TFRecords for each.

To create a dataset in Roboflow and generate TFRecords, follow this step-by-step guide.

I would also recommend this video if you are more comfortable learning from videos.

One thing to note is that we are not using this notebook to label our data. Make sure you have your images in JPEG, PNG, or BMP format and your annotations in XML, JSON, TXT, or another supported format. With both types of files ready, you can upload them to your Roboflow account.

Labelling can be done with tools such as LabelBox, the B-Box Label Tool, and other readily available annotation software.

Finally, when you have completed the above steps, you will be able to download your dataset, or get a link to it, split into train, valid, and test TFRecords along with their label map files.
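For reference, a generated label map for a single-class dataset like ours typically looks like the following (the exact `name` depends on your annotation labels; `Smoke` is what our dataset uses):

```
item {
  name: "Smoke"
  id: 1
}
```

Each class gets one `item` block, and the `id` values start at 1, not 0.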

Download Data

The cell below contains the download link for our smoke dataset. If you followed the Roboflow guide above, these steps should be clear by now.

Replace the link with your own if you are using a custom dataset; otherwise, leave it as it is.

In [ ]:
#Downloading data from Roboflow
#UPDATE THIS LINK - get our data from Roboflow
%cd /content
!curl -L "https://app.roboflow.ai/ds/EVwoZwzA30?key=6OawcH9tOw" > roboflow.zip; unzip roboflow.zip; rm roboflow.zip
/content
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   891  100   891    0     0   1050      0 --:--:-- --:--:-- --:--:--  1050
100 27.4M  100 27.4M    0     0  21.1M      0  0:00:01  0:00:01 --:--:--  113M
Archive:  roboflow.zip
 extracting: test/Smoke.tfrecord     
 extracting: train/Smoke.tfrecord    
 extracting: valid/Smoke.tfrecord    
 extracting: test/Smoke_label_map.pbtxt  
 extracting: train/Smoke_label_map.pbtxt  
 extracting: valid/Smoke_label_map.pbtxt  
 extracting: README.roboflow.txt     

Store TF Records file paths in variables

In [ ]:
# NOTE: Update these TFRecord and label map paths if you are using your own dataset!
# Evaluation during training runs on the validation split.
test_record_fname = './valid/Smoke.tfrecord'
train_record_fname = './train/Smoke.tfrecord'
label_map_pbtxt_fname = './train/Smoke_label_map.pbtxt'
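If you want to sanity-check how many examples each file contains without loading TensorFlow, the TFRecord on-disk format is simple enough to walk by hand: each record is a little-endian uint64 length, a 4-byte length CRC, the payload, and a 4-byte data CRC. A minimal counter (a sketch that skips, rather than verifies, the CRCs):

```python
import struct

def count_tfrecords(data: bytes) -> int:
    """Count records in raw TFRecord file bytes (does not verify CRCs)."""
    count, offset = 0, 0
    while offset < len(data):
        (length,) = struct.unpack_from('<Q', data, offset)
        # 8-byte length + 4-byte length CRC + payload + 4-byte data CRC
        offset += 8 + 4 + length + 4
        count += 1
    return count

# Build two fake records with zeroed CRCs to demonstrate the layout.
def fake_record(payload: bytes) -> bytes:
    return struct.pack('<Q', len(payload)) + b'\x00' * 4 + payload + b'\x00' * 4

buf = fake_record(b'example-1') + fake_record(b'example-02')
print(count_tfrecords(buf))  # 2
```

On the real files you would pass in `open(train_record_fname, 'rb').read()`; with TensorFlow available, `sum(1 for _ in tf.data.TFRecordDataset(path))` does the same job.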

Configure Custom TensorFlow2 Object Detection Training Configuration

In this section you can specify any model in the TF2 OD model zoo and set up your training configuration.

We will be using EfficientDet, a state-of-the-art object detection model well suited to real-time detection. EfficientDet pairs an EfficientNet backbone with a custom detection and classification network, and it is designed to scale efficiently upward from the smallest model size. The smallest variant, EfficientDet-D0, has only about 4 million weight parameters. EfficientDets are built on three ideas: an advanced backbone, a new BiFPN, and a new scaling technique:

  • Backbone: we employ EfficientNets as our backbone networks.
  • BiFPN: we propose BiFPN, a bi-directional feature network enhanced with fast normalization, which enables easy and fast feature fusion.
  • Scaling: we use a single compound scaling factor to govern the depth, width, and resolution for all backbone, feature & prediction networks.

In this distribution, EfficientDet runs inference in about 30 ms per image, making it a real-time model, and its weights take only about 17 MB of storage.

EfficientDet achieved state-of-the-art results on COCO when it was released, and we found that it performs slightly better than YOLOv3.

Read the EfficientDet paper to understand the architecture in detail.

In [ ]:
##Change chosen model to deploy different models available in the TF2 object detection zoo
MODELS_CONFIG = {
    'efficientdet-d0': {
        'model_name': 'efficientdet_d0_coco17_tpu-32',
        'base_pipeline_file': 'ssd_efficientdet_d0_512x512_coco17_tpu-8.config',
        'pretrained_checkpoint': 'efficientdet_d0_coco17_tpu-32.tar.gz',
        'batch_size': 16
    },
    'efficientdet-d1': {
        'model_name': 'efficientdet_d1_coco17_tpu-32',
        'base_pipeline_file': 'ssd_efficientdet_d1_640x640_coco17_tpu-8.config',
        'pretrained_checkpoint': 'efficientdet_d1_coco17_tpu-32.tar.gz',
        'batch_size': 16
    },
    'efficientdet-d2': {
        'model_name': 'efficientdet_d2_coco17_tpu-32',
        'base_pipeline_file': 'ssd_efficientdet_d2_768x768_coco17_tpu-8.config',
        'pretrained_checkpoint': 'efficientdet_d2_coco17_tpu-32.tar.gz',
        'batch_size': 16
    },
        'efficientdet-d3': {
        'model_name': 'efficientdet_d3_coco17_tpu-32',
        'base_pipeline_file': 'ssd_efficientdet_d3_896x896_coco17_tpu-32.config',
        'pretrained_checkpoint': 'efficientdet_d3_coco17_tpu-32.tar.gz',
        'batch_size': 16
    }
}

#in this tutorial we implement the lightweight, smallest state-of-the-art EfficientDet model
#if you want to scale up to larger EfficientDet models you will likely need more compute!
chosen_model = 'efficientdet-d0'

num_steps = 10000 #The more steps, the longer the training. Increase if your loss function is still decreasing and validation metrics are increasing. 
num_eval_steps = 500 #Perform evaluation after this many steps

model_name = MODELS_CONFIG[chosen_model]['model_name']
pretrained_checkpoint = MODELS_CONFIG[chosen_model]['pretrained_checkpoint']
base_pipeline_file = MODELS_CONFIG[chosen_model]['base_pipeline_file']
batch_size = MODELS_CONFIG[chosen_model]['batch_size'] #if you can fit a large batch in memory, it may speed up your training
In [ ]:
#download pretrained weights
%mkdir ./models/research/deploy/
%cd ./models/research/deploy/
import tarfile
download_tar = 'http://download.tensorflow.org/models/object_detection/tf2/20200711/' + pretrained_checkpoint

!wget {download_tar}
tar = tarfile.open(pretrained_checkpoint)
tar.extractall()
tar.close()
mkdir: cannot create directory ‘./models/research/deploy/’: File exists
./models/research/deploy
--2020-08-13 22:06:19--  http://download.tensorflow.org/models/object_detection/tf2/20200711/efficientdet_d0_coco17_tpu-32.tar.gz
Resolving download.tensorflow.org (download.tensorflow.org)... 74.125.20.128, 2607:f8b0:400e:c07::80
Connecting to download.tensorflow.org (download.tensorflow.org)|74.125.20.128|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 30736482 (29M) [application/x-tar]
Saving to: ‘efficientdet_d0_coco17_tpu-32.tar.gz.1’

efficientdet_d0_coc 100%[===================>]  29.31M  85.1MB/s    in 0.3s    

2020-08-13 22:06:20 (85.1 MB/s) - ‘efficientdet_d0_coco17_tpu-32.tar.gz.1’ saved [30736482/30736482]

In [ ]:
#download base training configuration file
%cd ./models/research/deploy
download_config = 'https://raw.githubusercontent.com/tensorflow/models/master/research/object_detection/configs/tf2/' + base_pipeline_file
!wget {download_config}
./models/research/deploy
--2020-08-13 22:06:36--  https://raw.githubusercontent.com/tensorflow/models/master/research/object_detection/configs/tf2/ssd_efficientdet_d0_512x512_coco17_tpu-8.config
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.0.133, 151.101.64.133, 151.101.128.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.0.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 4630 (4.5K) [text/plain]
Saving to: ‘ssd_efficientdet_d0_512x512_coco17_tpu-8.config’

ssd_efficientdet_d0 100%[===================>]   4.52K  --.-KB/s    in 0s      

2020-08-13 22:06:36 (56.2 MB/s) - ‘ssd_efficientdet_d0_512x512_coco17_tpu-8.config’ saved [4630/4630]

In [ ]:
#prepare paths to the base pipeline file and the pretrained checkpoint
pipeline_fname = './models/research/deploy/' + base_pipeline_file
fine_tune_checkpoint = './models/research/deploy/' + model_name + '/checkpoint/ckpt-0'

def get_num_classes(pbtxt_fname):
    from object_detection.utils import label_map_util
    label_map = label_map_util.load_labelmap(pbtxt_fname)
    categories = label_map_util.convert_label_map_to_categories(
        label_map, max_num_classes=90, use_display_name=True)
    category_index = label_map_util.create_category_index(categories)
    return len(category_index.keys())
num_classes = get_num_classes(label_map_pbtxt_fname)
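`get_num_classes` uses the Object Detection API's label map utilities. As a rough cross-check that doesn't need the API, the class count is simply the number of `item` blocks in the `.pbtxt` (a simplistic sketch that assumes one `item {` per class and no nested blocks):

```python
import re

def rough_num_classes(pbtxt_text: str) -> int:
    """Count `item {` blocks in label map text (no real proto parsing)."""
    return len(re.findall(r'\bitem\s*{', pbtxt_text))

sample = '''
item {
  name: "Smoke"
  id: 1
}
'''
print(rough_num_classes(sample))  # 1
```

For our single-class smoke dataset, both approaches should agree and yield 1.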
In [ ]:
#write custom configuration file by slotting our dataset, model checkpoint, and training parameters into the base pipeline file

import re

%cd ./models/research/deploy
print('writing custom configuration file')

with open(pipeline_fname) as f:
    s = f.read()
with open('pipeline_file.config', 'w') as f:
    
    # fine_tune_checkpoint
    s = re.sub('fine_tune_checkpoint: ".*?"',
               'fine_tune_checkpoint: "{}"'.format(fine_tune_checkpoint), s)
    
    # tfrecord files train and test.
    s = re.sub(
        '(input_path: ".*?)(PATH_TO_BE_CONFIGURED/train)(.*?")', 'input_path: "{}"'.format(train_record_fname), s)
    s = re.sub(
        '(input_path: ".*?)(PATH_TO_BE_CONFIGURED/val)(.*?")', 'input_path: "{}"'.format(test_record_fname), s)

    # label_map_path
    s = re.sub(
        'label_map_path: ".*?"', 'label_map_path: "{}"'.format(label_map_pbtxt_fname), s)

    # Set training batch_size.
    s = re.sub('batch_size: [0-9]+',
               'batch_size: {}'.format(batch_size), s)

    # Set training steps, num_steps
    s = re.sub('num_steps: [0-9]+',
               'num_steps: {}'.format(num_steps), s)
    
    # Set number of classes num_classes.
    s = re.sub('num_classes: [0-9]+',
               'num_classes: {}'.format(num_classes), s)
    
    #fine-tune checkpoint type
    s = re.sub(
        'fine_tune_checkpoint_type: "classification"', 'fine_tune_checkpoint_type: "{}"'.format('detection'), s)
        
    f.write(s)
./models/research/deploy
writing custom configuration file
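To see what the substitutions above do, here are three of them applied to a miniature config string (hypothetical values, same regex patterns as the cell above):

```python
import re

s = 'num_classes: 90\nbatch_size: 64\nnum_steps: 20000'

# Same patterns as the pipeline-writing cell, with our chosen values slotted in.
s = re.sub('num_classes: [0-9]+', 'num_classes: {}'.format(1), s)
s = re.sub('batch_size: [0-9]+', 'batch_size: {}'.format(16), s)
s = re.sub('num_steps: [0-9]+', 'num_steps: {}'.format(10000), s)

print(s)
# num_classes: 1
# batch_size: 16
# num_steps: 10000
```

Each `re.sub` replaces every occurrence of its pattern, which is why the same call can rewrite both the train and eval sections of the real pipeline file.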

Sanity check to verify the training configuration

In [ ]:
%cat ./models/research/deploy/pipeline_file.config
# SSD with EfficientNet-b0 + BiFPN feature extractor,
# shared box predictor and focal loss (a.k.a EfficientDet-d0).
# See EfficientDet, Tan et al, https://arxiv.org/abs/1911.09070
# See Lin et al, https://arxiv.org/abs/1708.02002
# Trained on COCO, initialized from an EfficientNet-b0 checkpoint.
#
# Train on TPU-8

model {
 ssd {
   inplace_batchnorm_update: true
   freeze_batchnorm: false
   num_classes: 1
   add_background_class: false
   box_coder {
     faster_rcnn_box_coder {
       y_scale: 10.0
       x_scale: 10.0
       height_scale: 5.0
       width_scale: 5.0
     }
   }
   matcher {
     argmax_matcher {
       matched_threshold: 0.5
       unmatched_threshold: 0.5
       ignore_thresholds: false
       negatives_lower_than_unmatched: true
       force_match_for_each_row: true
       use_matmul_gather: true
     }
   }
   similarity_calculator {
     iou_similarity {
     }
   }
   encode_background_as_zeros: true
   anchor_generator {
     multiscale_anchor_generator {
       min_level: 3
       max_level: 7
       anchor_scale: 4.0
       aspect_ratios: [1.0, 2.0, 0.5]
       scales_per_octave: 3
     }
   }
   image_resizer {
     keep_aspect_ratio_resizer {
       min_dimension: 512
       max_dimension: 512
       pad_to_max_dimension: true
       }
   }
   box_predictor {
     weight_shared_convolutional_box_predictor {
       depth: 64
       class_prediction_bias_init: -4.6
       conv_hyperparams {
         force_use_bias: true
         activation: SWISH
         regularizer {
           l2_regularizer {
             weight: 0.00004
           }
         }
         initializer {
           random_normal_initializer {
             stddev: 0.01
             mean: 0.0
           }
         }
         batch_norm {
           scale: true
           decay: 0.99
           epsilon: 0.001
         }
       }
       num_layers_before_predictor: 3
       kernel_size: 3
       use_depthwise: true
     }
   }
   feature_extractor {
     type: 'ssd_efficientnet-b0_bifpn_keras'
     bifpn {
       min_level: 3
       max_level: 7
       num_iterations: 3
       num_filters: 64
     }
     conv_hyperparams {
       force_use_bias: true
       activation: SWISH
       regularizer {
         l2_regularizer {
           weight: 0.00004
         }
       }
       initializer {
         truncated_normal_initializer {
           stddev: 0.03
           mean: 0.0
         }
       }
       batch_norm {
         scale: true,
         decay: 0.99,
         epsilon: 0.001,
       }
     }
   }
   loss {
     classification_loss {
       weighted_sigmoid_focal {
         alpha: 0.25
         gamma: 1.5
       }
     }
     localization_loss {
       weighted_smooth_l1 {
       }
     }
     classification_weight: 1.0
     localization_weight: 1.0
   }
   normalize_loss_by_num_matches: true
   normalize_loc_loss_by_codesize: true
   post_processing {
     batch_non_max_suppression {
       score_threshold: 1e-8
       iou_threshold: 0.5
       max_detections_per_class: 100
       max_total_detections: 100
     }
     score_converter: SIGMOID
   }
 }
}

train_config: {
 fine_tune_checkpoint: "./models/research/deploy/efficientdet_d0_coco17_tpu-32/checkpoint/ckpt-0"
 fine_tune_checkpoint_version: V2
 fine_tune_checkpoint_type: "detection"
 batch_size: 16
 sync_replicas: true
 startup_delay_steps: 0
 replicas_to_aggregate: 8
 use_bfloat16: true
 num_steps: 20000
 data_augmentation_options {
   random_horizontal_flip {
   }
 }
 data_augmentation_options {
   random_scale_crop_and_pad_to_square {
     output_size: 512
     scale_min: 0.1
     scale_max: 2.0
   }
 }
 optimizer {
   momentum_optimizer: {
     learning_rate: {
       manual_step_learning_rate {
         initial_learning_rate: 0.0003
         schedule {
           step: 4000
           learning_rate: 0.003
         }
         schedule {
           step: 5000
           learning_rate: 0.0003
         }
       }
     }
     momentum_optimizer_value: 0.9
   }
   use_moving_average: false
 }
 max_number_of_boxes: 100
 unpad_groundtruth_tensors: false
}

train_input_reader: {
 label_map_path: "./train/Smoke_label_map.pbtxt"
 tf_record_input_reader {
   input_path: "./train/Smoke.tfrecord"
 }
}

eval_config: {
 metrics_set: "coco_detection_metrics"
 use_moving_averages: false
 batch_size: 16;
}

eval_input_reader: {
 label_map_path: "./train/Smoke_label_map.pbtxt"
 shuffle: false
 num_epochs: 1
 tf_record_input_reader {
   input_path: "./valid/Smoke.tfrecord"
 }
}

If everything looks good above, go ahead and execute the cell below to store your config path in a pipeline_file variable and set a training directory.

We suggest making appropriate changes to your config file, especially the train_config section of the pipeline file. Experimenting with different learning rates and tuning your model is key to achieving good accuracy.

We have used the concept of cyclical learning rates, a useful technique for finding an optimal learning rate.

If your loss is not decreasing as expected during training, visit this link to learn about hyperparameter tuning and selecting an optimal learning rate. For now, let's move ahead.
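One simple cyclical schedule is the triangular policy, where the learning rate ramps linearly between a lower and an upper bound each cycle. A minimal sketch with hypothetical bounds (illustrative only; the pipeline file above encodes its schedule via manual_step_learning_rate):

```python
def triangular_lr(step, base_lr=1e-4, max_lr=1e-3, cycle_steps=2000):
    """Triangular cyclical learning rate: linear ramp up then down each cycle."""
    half = cycle_steps / 2
    pos = step % cycle_steps  # position within the current cycle
    frac = pos / half if pos <= half else (cycle_steps - pos) / half
    return base_lr + (max_lr - base_lr) * frac

print(round(triangular_lr(0), 6))     # 0.0001 (base)
print(round(triangular_lr(1000), 6))  # 0.001  (peak at mid-cycle)
print(round(triangular_lr(2000), 6))  # 0.0001 (back to base)
```

Sweeping the rate this way lets you observe at which learning rates the loss actually improves, which informs the bounds you then hard-code into the pipeline schedule.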

In [ ]:
pipeline_file = './models/research/deploy/pipeline_file.config'
model_dir = './training/'

Train TF2 Smoke Detector

  • pipeline_file: defined above in writing custom training configuration
  • model_dir: the location where TensorBoard logs and saved model checkpoints will be written
  • num_train_steps: how long to train for
  • num_eval_steps: perform eval on validation set after this many steps
In [ ]:
!python ./models/research/object_detection/model_main_tf2.py \
    --pipeline_config_path={pipeline_file} \
    --model_dir={model_dir} \
    --alsologtostderr \
    --num_train_steps={num_steps} \
    --sample_1_of_n_eval_examples=1 \
    --num_eval_steps={num_eval_steps}
2020-08-13 22:10:22.438571: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2020-08-13 22:10:22.473745: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-08-13 22:10:22.474319: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties: 
pciBusID: 0000:00:04.0 name: Tesla T4 computeCapability: 7.5
coreClock: 1.59GHz coreCount: 40 deviceMemorySize: 14.73GiB deviceMemoryBandwidth: 298.08GiB/s
2020-08-13 22:10:22.474580: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2020-08-13 22:10:22.476228: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-08-13 22:10:22.484318: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10
2020-08-13 22:10:22.484671: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10
2020-08-13 22:10:22.486904: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10
2020-08-13 22:10:22.487865: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10
2020-08-13 22:10:22.491944: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-08-13 22:10:22.492084: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-08-13 22:10:22.492713: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-08-13 22:10:22.493274: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0
2020-08-13 22:10:22.493613: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-08-13 22:10:22.498705: I tensorflow/core/platform/profile_utils/cpu_utils.cc:102] CPU Frequency: 2200000000 Hz
2020-08-13 22:10:22.498913: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x22b0bc0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-08-13 22:10:22.498943: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
2020-08-13 22:10:22.608091: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-08-13 22:10:22.608771: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x22b0a00 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-08-13 22:10:22.608808: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Tesla T4, Compute Capability 7.5
2020-08-13 22:10:22.608996: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-08-13 22:10:22.609542: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties: 
pciBusID: 0000:00:04.0 name: Tesla T4 computeCapability: 7.5
coreClock: 1.59GHz coreCount: 40 deviceMemorySize: 14.73GiB deviceMemoryBandwidth: 298.08GiB/s
2020-08-13 22:10:22.609607: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2020-08-13 22:10:22.609633: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-08-13 22:10:22.609658: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10
2020-08-13 22:10:22.609680: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10
2020-08-13 22:10:22.609700: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10
2020-08-13 22:10:22.609720: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10
2020-08-13 22:10:22.609741: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-08-13 22:10:22.609832: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-08-13 22:10:22.610400: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-08-13 22:10:22.610918: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0
2020-08-13 22:10:22.610991: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2020-08-13 22:10:22.612263: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-08-13 22:10:22.612294: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1108]      0 
2020-08-13 22:10:22.612308: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 0:   N 
2020-08-13 22:10:22.612432: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-08-13 22:10:22.613012: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-08-13 22:10:22.613489: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:39] Overriding allow_growth setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.
2020-08-13 22:10:22.613534: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 14071 MB memory) -> physical GPU (device: 0, name: Tesla T4, pci bus id: 0000:00:04.0, compute capability: 7.5)
INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0',)
INFO:tensorflow:Maybe overwriting train_steps: 10000
INFO:tensorflow:Maybe overwriting use_bfloat16: False
I0813 22:10:22.635978 140006873859968 ssd_efficientnet_bifpn_feature_extractor.py:144] EfficientDet EfficientNet backbone version: efficientnet-b0
I0813 22:10:22.636117 140006873859968 ssd_efficientnet_bifpn_feature_extractor.py:145] EfficientDet BiFPN num filters: 64
I0813 22:10:22.636192 140006873859968 ssd_efficientnet_bifpn_feature_extractor.py:147] EfficientDet BiFPN num iterations: 3
I0813 22:10:22.648280 140006873859968 efficientnet_model.py:146] round_filter input=32 output=32
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
I0813 22:10:23.362882 140006873859968 efficientnet_model.py:146] round_filter input=32 output=32
I0813 22:10:23.362992 140006873859968 efficientnet_model.py:146] round_filter input=16 output=16
I0813 22:10:23.500891 140006873859968 efficientnet_model.py:146] round_filter input=16 output=16
I0813 22:10:23.500995 140006873859968 efficientnet_model.py:146] round_filter input=24 output=24
I0813 22:10:23.970150 140006873859968 efficientnet_model.py:146] round_filter input=24 output=24
I0813 22:10:23.970327 140006873859968 efficientnet_model.py:146] round_filter input=40 output=40
I0813 22:10:24.342478 140006873859968 efficientnet_model.py:146] round_filter input=40 output=40
I0813 22:10:24.342643 140006873859968 efficientnet_model.py:146] round_filter input=80 output=80
I0813 22:10:24.923755 140006873859968 efficientnet_model.py:146] round_filter input=80 output=80
I0813 22:10:24.923956 140006873859968 efficientnet_model.py:146] round_filter input=112 output=112
I0813 22:10:25.486908 140006873859968 efficientnet_model.py:146] round_filter input=112 output=112
I0813 22:10:25.487080 140006873859968 efficientnet_model.py:146] round_filter input=192 output=192
I0813 22:10:26.255973 140006873859968 efficientnet_model.py:146] round_filter input=192 output=192
I0813 22:10:26.256170 140006873859968 efficientnet_model.py:146] round_filter input=320 output=320
I0813 22:10:26.431498 140006873859968 efficientnet_model.py:146] round_filter input=1280 output=1280
I0813 22:10:26.501837 140006873859968 efficientnet_model.py:459] Building model efficientnet with params ModelConfig(width_coefficient=1.0, depth_coefficient=1.0, resolution=224, dropout_rate=0.2, blocks=(BlockConfig(input_filters=32, output_filters=16, kernel_size=3, num_repeat=1, expand_ratio=1, strides=(1, 1), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=16, output_filters=24, kernel_size=3, num_repeat=2, expand_ratio=6, strides=(2, 2), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=24, output_filters=40, kernel_size=5, num_repeat=2, expand_ratio=6, strides=(2, 2), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=40, output_filters=80, kernel_size=3, num_repeat=3, expand_ratio=6, strides=(2, 2), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=80, output_filters=112, kernel_size=5, num_repeat=3, expand_ratio=6, strides=(1, 1), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=112, output_filters=192, kernel_size=5, num_repeat=4, expand_ratio=6, strides=(2, 2), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=192, output_filters=320, kernel_size=3, num_repeat=1, expand_ratio=6, strides=(1, 1), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise')), stem_base_filters=32, top_base_filters=1280, activation='simple_swish', batch_norm='default', bn_momentum=0.99, bn_epsilon=0.001, weight_decay=5e-06, drop_connect_rate=0.2, depth_divisor=8, min_depth=None, use_se=True, input_channels=3, num_classes=1000, model_name='efficientnet', rescale_input=False, data_format='channels_last', dtype='float32')
WARNING:tensorflow:num_readers has been reduced to 1 to match input file shards.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/object_detection/builders/dataset_builder.py:100: parallel_interleave (from tensorflow.python.data.experimental.ops.interleave_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.Dataset.interleave(map_func, cycle_length, block_length, num_parallel_calls=tf.data.experimental.AUTOTUNE)` instead. If sloppy execution is desired, use `tf.data.Options.experimental_deterministic`.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/object_detection/builders/dataset_builder.py:175: DatasetV1.map_with_legacy_function (from tensorflow.python.data.ops.dataset_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.Dataset.map()
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/object_detection/inputs.py:79: sparse_to_dense (from tensorflow.python.ops.sparse_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Create a `tf.sparse.SparseTensor` and use `tf.sparse.to_dense` instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/object_detection/inputs.py:259: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.cast` instead.
2020-08-13 22:11:22.628086: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-08-13 22:11:24.050877: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._groundtruth_lists
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._batched_prediction_tensor_names
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._box_prediction_head
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._prediction_heads
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._sorted_head_names
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._additional_projection_layers
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._head_scope_conv_layers
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._box_prediction_head._box_encoder_layers
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._prediction_heads.class_predictions_with_background
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._additional_projection_layers.0
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._additional_projection_layers.1
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._additional_projection_layers.2
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._additional_projection_layers.3
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._additional_projection_layers.4
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.class_predictions_with_background
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._head_scope_conv_layers.BoxPredictionTower
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._head_scope_conv_layers.ClassPredictionTower
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._box_prediction_head._box_encoder_layers.0
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._prediction_heads.class_predictions_with_background._class_predictor_layers
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.0
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.1
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.2
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.3
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.4
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.class_predictions_with_background.0
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.class_predictions_with_background.1
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.class_predictions_with_background.2
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.class_predictions_with_background.3
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.class_predictions_with_background.4
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._head_scope_conv_layers.BoxPredictionTower.0
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._head_scope_conv_layers.BoxPredictionTower.1
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._head_scope_conv_layers.BoxPredictionTower.2
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._head_scope_conv_layers.ClassPredictionTower.0
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._head_scope_conv_layers.ClassPredictionTower.1
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._head_scope_conv_layers.ClassPredictionTower.2
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._box_prediction_head._box_encoder_layers.0.depthwise_kernel
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._box_prediction_head._box_encoder_layers.0.pointwise_kernel
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._box_prediction_head._box_encoder_layers.0.bias
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._prediction_heads.class_predictions_with_background._class_predictor_layers.0
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.0.1
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.0.2
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.0.4
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.0.5
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.0.7
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.0.8
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.1.1
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.1.2
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.1.4
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.1.5
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.1.7
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.1.8
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.2.1
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.2.2
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.2.4
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.2.5
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.2.7
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.2.8
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.3.1
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.3.2
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.3.4
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.3.5
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.3.7
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.3.8
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.4.1
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.4.2
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.4.4
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.4.5
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.4.7
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.4.8
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.class_predictions_with_background.0.1
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.class_predictions_with_background.0.2
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.class_predictions_with_background.0.4
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.class_predictions_with_background.0.5
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.class_predictions_with_background.0.7
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.class_predictions_with_background.0.8
W0813 22:11:33.967026 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.class_predictions_with_background.0.8
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.class_predictions_with_background.1.1
W0813 22:11:33.967113 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.class_predictions_with_background.1.1
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.class_predictions_with_background.1.2
W0813 22:11:33.967194 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.class_predictions_with_background.1.2
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.class_predictions_with_background.1.4
W0813 22:11:33.967281 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.class_predictions_with_background.1.4
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.class_predictions_with_background.1.5
W0813 22:11:33.967367 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.class_predictions_with_background.1.5
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.class_predictions_with_background.1.7
W0813 22:11:33.967450 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.class_predictions_with_background.1.7
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.class_predictions_with_background.1.8
W0813 22:11:33.967533 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.class_predictions_with_background.1.8
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.class_predictions_with_background.2.1
W0813 22:11:33.967616 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.class_predictions_with_background.2.1
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.class_predictions_with_background.2.2
W0813 22:11:33.967697 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.class_predictions_with_background.2.2
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.class_predictions_with_background.2.4
W0813 22:11:33.967813 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.class_predictions_with_background.2.4
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.class_predictions_with_background.2.5
W0813 22:11:33.967904 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.class_predictions_with_background.2.5
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.class_predictions_with_background.2.7
W0813 22:11:33.967987 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.class_predictions_with_background.2.7
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.class_predictions_with_background.2.8
W0813 22:11:33.968070 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.class_predictions_with_background.2.8
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.class_predictions_with_background.3.1
W0813 22:11:33.968153 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.class_predictions_with_background.3.1
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.class_predictions_with_background.3.2
W0813 22:11:33.968234 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.class_predictions_with_background.3.2
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.class_predictions_with_background.3.4
W0813 22:11:33.968316 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.class_predictions_with_background.3.4
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.class_predictions_with_background.3.5
W0813 22:11:33.968399 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.class_predictions_with_background.3.5
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.class_predictions_with_background.3.7
W0813 22:11:33.968482 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.class_predictions_with_background.3.7
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.class_predictions_with_background.3.8
W0813 22:11:33.968564 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.class_predictions_with_background.3.8
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.class_predictions_with_background.4.1
W0813 22:11:33.968645 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.class_predictions_with_background.4.1
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.class_predictions_with_background.4.2
W0813 22:11:33.968727 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.class_predictions_with_background.4.2
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.class_predictions_with_background.4.4
W0813 22:11:33.968846 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.class_predictions_with_background.4.4
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.class_predictions_with_background.4.5
W0813 22:11:33.968931 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.class_predictions_with_background.4.5
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.class_predictions_with_background.4.7
W0813 22:11:33.969013 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.class_predictions_with_background.4.7
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.class_predictions_with_background.4.8
W0813 22:11:33.969096 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.class_predictions_with_background.4.8
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._head_scope_conv_layers.BoxPredictionTower.0.depthwise_kernel
W0813 22:11:33.969179 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._head_scope_conv_layers.BoxPredictionTower.0.depthwise_kernel
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._head_scope_conv_layers.BoxPredictionTower.0.pointwise_kernel
W0813 22:11:33.969262 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._head_scope_conv_layers.BoxPredictionTower.0.pointwise_kernel
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._head_scope_conv_layers.BoxPredictionTower.0.bias
W0813 22:11:33.969347 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._head_scope_conv_layers.BoxPredictionTower.0.bias
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._head_scope_conv_layers.BoxPredictionTower.1.depthwise_kernel
W0813 22:11:33.969431 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._head_scope_conv_layers.BoxPredictionTower.1.depthwise_kernel
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._head_scope_conv_layers.BoxPredictionTower.1.pointwise_kernel
W0813 22:11:33.969514 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._head_scope_conv_layers.BoxPredictionTower.1.pointwise_kernel
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._head_scope_conv_layers.BoxPredictionTower.1.bias
W0813 22:11:33.969598 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._head_scope_conv_layers.BoxPredictionTower.1.bias
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._head_scope_conv_layers.BoxPredictionTower.2.depthwise_kernel
W0813 22:11:33.969702 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._head_scope_conv_layers.BoxPredictionTower.2.depthwise_kernel
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._head_scope_conv_layers.BoxPredictionTower.2.pointwise_kernel
W0813 22:11:33.969818 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._head_scope_conv_layers.BoxPredictionTower.2.pointwise_kernel
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._head_scope_conv_layers.BoxPredictionTower.2.bias
W0813 22:11:33.969914 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._head_scope_conv_layers.BoxPredictionTower.2.bias
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._head_scope_conv_layers.ClassPredictionTower.0.depthwise_kernel
W0813 22:11:33.970003 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._head_scope_conv_layers.ClassPredictionTower.0.depthwise_kernel
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._head_scope_conv_layers.ClassPredictionTower.0.pointwise_kernel
W0813 22:11:33.970091 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._head_scope_conv_layers.ClassPredictionTower.0.pointwise_kernel
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._head_scope_conv_layers.ClassPredictionTower.0.bias
W0813 22:11:33.970178 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._head_scope_conv_layers.ClassPredictionTower.0.bias
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._head_scope_conv_layers.ClassPredictionTower.1.depthwise_kernel
W0813 22:11:33.970265 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._head_scope_conv_layers.ClassPredictionTower.1.depthwise_kernel
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._head_scope_conv_layers.ClassPredictionTower.1.pointwise_kernel
W0813 22:11:33.970376 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._head_scope_conv_layers.ClassPredictionTower.1.pointwise_kernel
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._head_scope_conv_layers.ClassPredictionTower.1.bias
W0813 22:11:33.970483 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._head_scope_conv_layers.ClassPredictionTower.1.bias
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._head_scope_conv_layers.ClassPredictionTower.2.depthwise_kernel
W0813 22:11:33.970571 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._head_scope_conv_layers.ClassPredictionTower.2.depthwise_kernel
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._head_scope_conv_layers.ClassPredictionTower.2.pointwise_kernel
W0813 22:11:33.970658 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._head_scope_conv_layers.ClassPredictionTower.2.pointwise_kernel
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._head_scope_conv_layers.ClassPredictionTower.2.bias
W0813 22:11:33.970745 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._head_scope_conv_layers.ClassPredictionTower.2.bias
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._prediction_heads.class_predictions_with_background._class_predictor_layers.0.depthwise_kernel
W0813 22:11:33.970900 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._prediction_heads.class_predictions_with_background._class_predictor_layers.0.depthwise_kernel
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._prediction_heads.class_predictions_with_background._class_predictor_layers.0.pointwise_kernel
W0813 22:11:33.971015 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._prediction_heads.class_predictions_with_background._class_predictor_layers.0.pointwise_kernel
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._prediction_heads.class_predictions_with_background._class_predictor_layers.0.bias
W0813 22:11:33.971108 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._prediction_heads.class_predictions_with_background._class_predictor_layers.0.bias
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.0.1.axis
W0813 22:11:33.971229 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.0.1.axis
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.0.1.gamma
W0813 22:11:33.971346 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.0.1.gamma
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.0.1.beta
W0813 22:11:33.971435 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.0.1.beta
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.0.1.moving_mean
W0813 22:11:33.971559 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.0.1.moving_mean
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.0.1.moving_variance
W0813 22:11:33.971650 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.0.1.moving_variance
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.0.4.axis
W0813 22:11:33.971722 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.0.4.axis
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.0.4.gamma
W0813 22:11:33.971810 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.0.4.gamma
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.0.4.beta
W0813 22:11:33.971877 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.0.4.beta
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.0.4.moving_mean
W0813 22:11:33.971941 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.0.4.moving_mean
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.0.4.moving_variance
W0813 22:11:33.972006 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.0.4.moving_variance
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.0.7.axis
W0813 22:11:33.972070 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.0.7.axis
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.0.7.gamma
W0813 22:11:33.972144 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.0.7.gamma
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.0.7.beta
W0813 22:11:33.972203 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.0.7.beta
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.0.7.moving_mean
W0813 22:11:33.972263 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.0.7.moving_mean
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.0.7.moving_variance
W0813 22:11:33.972324 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.0.7.moving_variance
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.1.1.axis
W0813 22:11:33.972384 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.1.1.axis
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.1.1.gamma
W0813 22:11:33.972444 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.1.1.gamma
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.1.1.beta
W0813 22:11:33.972505 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.1.1.beta
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.1.1.moving_mean
W0813 22:11:33.972564 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.1.1.moving_mean
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.1.1.moving_variance
W0813 22:11:33.972624 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.1.1.moving_variance
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.1.4.axis
W0813 22:11:33.972683 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.1.4.axis
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.1.4.gamma
W0813 22:11:33.972742 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.1.4.gamma
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.1.4.beta
W0813 22:11:33.972820 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.1.4.beta
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.1.4.moving_mean
W0813 22:11:33.972879 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.1.4.moving_mean
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.1.4.moving_variance
W0813 22:11:33.972939 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.1.4.moving_variance
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.1.7.axis
W0813 22:11:33.972999 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.1.7.axis
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.1.7.gamma
W0813 22:11:33.973059 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.1.7.gamma
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.1.7.beta
W0813 22:11:33.973118 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.1.7.beta
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.1.7.moving_mean
W0813 22:11:33.973177 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.1.7.moving_mean
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.1.7.moving_variance
W0813 22:11:33.973237 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.1.7.moving_variance
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.2.1.axis
W0813 22:11:33.973296 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.2.1.axis
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.2.1.gamma
W0813 22:11:33.973356 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.2.1.gamma
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.2.1.beta
W0813 22:11:33.973415 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.2.1.beta
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.2.1.moving_mean
W0813 22:11:33.973474 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.2.1.moving_mean
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.2.1.moving_variance
W0813 22:11:33.973534 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.2.1.moving_variance
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.2.4.axis
W0813 22:11:33.973594 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.2.4.axis
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.2.4.gamma
W0813 22:11:33.973653 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.2.4.gamma
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.2.4.beta
W0813 22:11:33.973713 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.2.4.beta
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.2.4.moving_mean
W0813 22:11:33.973781 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.2.4.moving_mean
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.2.4.moving_variance
W0813 22:11:33.973847 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.2.4.moving_variance
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.2.7.axis
W0813 22:11:33.973920 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.2.7.axis
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.2.7.gamma
W0813 22:11:33.973979 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.2.7.gamma
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.2.7.beta
W0813 22:11:33.974065 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.2.7.beta
WARNING:tensorflow:Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.2.7.moving_mean
W0813 22:11:33.974129 140006873859968 util.py:144] Unresolved object in checkpoint: (root).model._box_predictor._base_tower_layers_for_heads.box_encodings.2.7.moving_mean
... (similar "Unresolved object in checkpoint" warnings repeat for every batch-normalization variable — axis, gamma, beta, moving_mean, moving_variance — in the remaining box_encodings and class_predictions_with_background head layers; duplicate logger copies of each warning omitted) ...
WARNING:tensorflow:A checkpoint was restored (e.g. tf.train.Checkpoint.restore or tf.keras.Model.load_weights) but not all checkpointed values were used. See above for specific issues. Use expect_partial() on the load status object, e.g. tf.train.Checkpoint.restore(...).expect_partial(), to silence these warnings, or use assert_consumed() to make the check explicit. See https://www.tensorflow.org/guide/checkpoint#loading_mechanics for details.
W0813 22:11:34.080513 140006873859968 util.py:152] A checkpoint was restored (e.g. tf.train.Checkpoint.restore or tf.keras.Model.load_weights) but not all checkpointed values were used. See above for specific issues. Use expect_partial() on the load status object, e.g. tf.train.Checkpoint.restore(...).expect_partial(), to silence these warnings, or use assert_consumed() to make the check explicit. See https://www.tensorflow.org/guide/checkpoint#loading_mechanics for details.
WARNING:tensorflow:num_readers has been reduced to 1 to match input file shards.
W0813 22:11:34.088242 140006873859968 dataset_builder.py:83] num_readers has been reduced to 1 to match input file shards.
WARNING:tensorflow:Gradients do not exist for variables ['top_bn/gamma:0', 'top_bn/beta:0'] when minimizing the loss.
W0813 22:11:46.071012 140002819544832 optimizer_v2.py:1223] Gradients do not exist for variables ['top_bn/gamma:0', 'top_bn/beta:0'] when minimizing the loss.
WARNING:tensorflow:Gradients do not exist for variables ['top_bn/gamma:0', 'top_bn/beta:0'] when minimizing the loss.
W0813 22:11:59.871310 140002819544832 optimizer_v2.py:1223] Gradients do not exist for variables ['top_bn/gamma:0', 'top_bn/beta:0'] when minimizing the loss.
INFO:tensorflow:Step 100 per-step time 1.151s loss=1.925
I0813 22:14:13.740751 140006873859968 model_lib_v2.py:647] Step 100 per-step time 1.151s loss=1.925
INFO:tensorflow:Step 200 per-step time 1.177s loss=1.953
I0813 22:16:10.760205 140006873859968 model_lib_v2.py:647] Step 200 per-step time 1.177s loss=1.953
INFO:tensorflow:Step 300 per-step time 1.119s loss=1.937
I0813 22:18:09.129777 140006873859968 model_lib_v2.py:647] Step 300 per-step time 1.119s loss=1.937
INFO:tensorflow:Step 400 per-step time 1.216s loss=1.923
I0813 22:20:06.049866 140006873859968 model_lib_v2.py:647] Step 400 per-step time 1.216s loss=1.923
INFO:tensorflow:Step 500 per-step time 1.173s loss=1.898
I0813 22:22:02.484669 140006873859968 model_lib_v2.py:647] Step 500 per-step time 1.173s loss=1.898
INFO:tensorflow:Step 600 per-step time 1.200s loss=1.901
I0813 22:23:58.854464 140006873859968 model_lib_v2.py:647] Step 600 per-step time 1.200s loss=1.901
INFO:tensorflow:Step 700 per-step time 1.176s loss=1.708
I0813 22:25:54.957367 140006873859968 model_lib_v2.py:647] Step 700 per-step time 1.176s loss=1.708
INFO:tensorflow:Step 800 per-step time 1.249s loss=1.533
I0813 22:27:52.541394 140006873859968 model_lib_v2.py:647] Step 800 per-step time 1.249s loss=1.533
INFO:tensorflow:Step 900 per-step time 1.124s loss=1.276
I0813 22:29:48.546311 140006873859968 model_lib_v2.py:647] Step 900 per-step time 1.124s loss=1.276
INFO:tensorflow:Step 1000 per-step time 1.146s loss=1.135
I0813 22:31:45.137908 140006873859968 model_lib_v2.py:647] Step 1000 per-step time 1.146s loss=1.135
INFO:tensorflow:Step 1100 per-step time 1.197s loss=1.136
I0813 22:33:42.247239 140006873859968 model_lib_v2.py:647] Step 1100 per-step time 1.197s loss=1.136
INFO:tensorflow:Step 1200 per-step time 1.212s loss=1.083
I0813 22:35:38.017867 140006873859968 model_lib_v2.py:647] Step 1200 per-step time 1.212s loss=1.083
INFO:tensorflow:Step 1300 per-step time 1.110s loss=1.047
I0813 22:37:33.969920 140006873859968 model_lib_v2.py:647] Step 1300 per-step time 1.110s loss=1.047
INFO:tensorflow:Step 1400 per-step time 1.175s loss=0.891
I0813 22:39:30.753473 140006873859968 model_lib_v2.py:647] Step 1400 per-step time 1.175s loss=0.891
INFO:tensorflow:Step 1500 per-step time 1.184s loss=0.884
I0813 22:41:26.534734 140006873859968 model_lib_v2.py:647] Step 1500 per-step time 1.184s loss=0.884
INFO:tensorflow:Step 1600 per-step time 1.101s loss=0.673
I0813 22:43:22.033168 140006873859968 model_lib_v2.py:647] Step 1600 per-step time 1.101s loss=0.673
INFO:tensorflow:Step 1700 per-step time 1.136s loss=0.790
I0813 22:45:18.178511 140006873859968 model_lib_v2.py:647] Step 1700 per-step time 1.136s loss=0.790
INFO:tensorflow:Step 1800 per-step time 1.173s loss=0.775
I0813 22:47:14.656067 140006873859968 model_lib_v2.py:647] Step 1800 per-step time 1.173s loss=0.775
INFO:tensorflow:Step 1900 per-step time 1.103s loss=0.686
I0813 22:49:09.823190 140006873859968 model_lib_v2.py:647] Step 1900 per-step time 1.103s loss=0.686
INFO:tensorflow:Step 2000 per-step time 1.201s loss=0.600
I0813 22:51:05.672816 140006873859968 model_lib_v2.py:647] Step 2000 per-step time 1.201s loss=0.600
INFO:tensorflow:Step 2100 per-step time 1.155s loss=0.594
I0813 22:53:02.180810 140006873859968 model_lib_v2.py:647] Step 2100 per-step time 1.155s loss=0.594
INFO:tensorflow:Step 2200 per-step time 1.164s loss=0.580
I0813 22:54:57.429358 140006873859968 model_lib_v2.py:647] Step 2200 per-step time 1.164s loss=0.580
INFO:tensorflow:Step 2300 per-step time 1.111s loss=0.732
I0813 22:56:54.230188 140006873859968 model_lib_v2.py:647] Step 2300 per-step time 1.111s loss=0.732
INFO:tensorflow:Step 2400 per-step time 1.140s loss=0.752
I0813 22:58:49.962999 140006873859968 model_lib_v2.py:647] Step 2400 per-step time 1.140s loss=0.752
INFO:tensorflow:Step 2500 per-step time 1.178s loss=0.600
I0813 23:00:46.389621 140006873859968 model_lib_v2.py:647] Step 2500 per-step time 1.178s loss=0.600
INFO:tensorflow:Step 2600 per-step time 1.100s loss=0.541
I0813 23:02:42.204446 140006873859968 model_lib_v2.py:647] Step 2600 per-step time 1.100s loss=0.541
INFO:tensorflow:Step 2700 per-step time 1.163s loss=0.519
I0813 23:04:38.128599 140006873859968 model_lib_v2.py:647] Step 2700 per-step time 1.163s loss=0.519
INFO:tensorflow:Step 2800 per-step time 1.185s loss=0.615
I0813 23:06:34.657800 140006873859968 model_lib_v2.py:647] Step 2800 per-step time 1.185s loss=0.615
INFO:tensorflow:Step 2900 per-step time 1.154s loss=0.722
I0813 23:08:30.501943 140006873859968 model_lib_v2.py:647] Step 2900 per-step time 1.154s loss=0.722
INFO:tensorflow:Step 3000 per-step time 1.196s loss=0.763
I0813 23:10:26.648993 140006873859968 model_lib_v2.py:647] Step 3000 per-step time 1.196s loss=0.763
INFO:tensorflow:Step 3100 per-step time 1.205s loss=0.632
I0813 23:12:23.002859 140006873859968 model_lib_v2.py:647] Step 3100 per-step time 1.205s loss=0.632
INFO:tensorflow:Step 3200 per-step time 1.114s loss=0.716
I0813 23:14:18.279392 140006873859968 model_lib_v2.py:647] Step 3200 per-step time 1.114s loss=0.716
INFO:tensorflow:Step 3300 per-step time 1.146s loss=0.472
I0813 23:16:13.921082 140006873859968 model_lib_v2.py:647] Step 3300 per-step time 1.146s loss=0.472
INFO:tensorflow:Step 3400 per-step time 1.178s loss=0.459
I0813 23:18:09.493846 140006873859968 model_lib_v2.py:647] Step 3400 per-step time 1.178s loss=0.459
INFO:tensorflow:Step 3500 per-step time 1.166s loss=0.510
I0813 23:20:03.469440 140006873859968 model_lib_v2.py:647] Step 3500 per-step time 1.166s loss=0.510
INFO:tensorflow:Step 3600 per-step time 1.093s loss=0.737
I0813 23:21:58.773575 140006873859968 model_lib_v2.py:647] Step 3600 per-step time 1.093s loss=0.737
INFO:tensorflow:Step 3700 per-step time 1.138s loss=0.586
I0813 23:23:54.494221 140006873859968 model_lib_v2.py:647] Step 3700 per-step time 1.138s loss=0.586
INFO:tensorflow:Step 3800 per-step time 1.139s loss=0.589
I0813 23:25:50.030404 140006873859968 model_lib_v2.py:647] Step 3800 per-step time 1.139s loss=0.589
INFO:tensorflow:Step 3900 per-step time 1.135s loss=0.551
I0813 23:27:46.267330 140006873859968 model_lib_v2.py:647] Step 3900 per-step time 1.135s loss=0.551
INFO:tensorflow:Step 4000 per-step time 1.147s loss=0.463
I0813 23:29:40.953073 140006873859968 model_lib_v2.py:647] Step 4000 per-step time 1.147s loss=0.463
INFO:tensorflow:Step 4100 per-step time 1.147s loss=0.582
I0813 23:31:36.543130 140006873859968 model_lib_v2.py:647] Step 4100 per-step time 1.147s loss=0.582
INFO:tensorflow:Step 4200 per-step time 1.160s loss=0.534
I0813 23:33:31.355457 140006873859968 model_lib_v2.py:647] Step 4200 per-step time 1.160s loss=0.534
INFO:tensorflow:Step 4300 per-step time 1.184s loss=0.480
I0813 23:35:26.549800 140006873859968 model_lib_v2.py:647] Step 4300 per-step time 1.184s loss=0.480
INFO:tensorflow:Step 4400 per-step time 1.207s loss=0.492
I0813 23:37:21.722975 140006873859968 model_lib_v2.py:647] Step 4400 per-step time 1.207s loss=0.492
INFO:tensorflow:Step 4500 per-step time 1.183s loss=0.621
I0813 23:39:17.630219 140006873859968 model_lib_v2.py:647] Step 4500 per-step time 1.183s loss=0.621
INFO:tensorflow:Step 4600 per-step time 1.115s loss=0.440
I0813 23:41:12.699441 140006873859968 model_lib_v2.py:647] Step 4600 per-step time 1.115s loss=0.440
INFO:tensorflow:Step 4700 per-step time 1.138s loss=0.452
I0813 23:43:08.172145 140006873859968 model_lib_v2.py:647] Step 4700 per-step time 1.138s loss=0.452
INFO:tensorflow:Step 4800 per-step time 1.129s loss=0.418
I0813 23:45:03.389499 140006873859968 model_lib_v2.py:647] Step 4800 per-step time 1.129s loss=0.418
INFO:tensorflow:Step 4900 per-step time 1.163s loss=0.405
I0813 23:46:58.884697 140006873859968 model_lib_v2.py:647] Step 4900 per-step time 1.163s loss=0.405
INFO:tensorflow:Step 5000 per-step time 1.276s loss=0.338
I0813 23:48:54.550435 140006873859968 model_lib_v2.py:647] Step 5000 per-step time 1.276s loss=0.338
INFO:tensorflow:Step 5100 per-step time 1.096s loss=0.343
I0813 23:50:50.610792 140006873859968 model_lib_v2.py:647] Step 5100 per-step time 1.096s loss=0.343
INFO:tensorflow:Step 5200 per-step time 1.149s loss=0.460
I0813 23:52:45.490033 140006873859968 model_lib_v2.py:647] Step 5200 per-step time 1.149s loss=0.460
INFO:tensorflow:Step 5300 per-step time 1.163s loss=0.330
I0813 23:54:40.231120 140006873859968 model_lib_v2.py:647] Step 5300 per-step time 1.163s loss=0.330
INFO:tensorflow:Step 5400 per-step time 1.108s loss=0.397
I0813 23:56:35.278238 140006873859968 model_lib_v2.py:647] Step 5400 per-step time 1.108s loss=0.397
INFO:tensorflow:Step 5500 per-step time 1.117s loss=0.320
I0813 23:58:30.859297 140006873859968 model_lib_v2.py:647] Step 5500 per-step time 1.117s loss=0.320
INFO:tensorflow:Step 5600 per-step time 1.134s loss=0.355
I0814 00:00:26.087466 140006873859968 model_lib_v2.py:647] Step 5600 per-step time 1.134s loss=0.355
INFO:tensorflow:Step 5700 per-step time 1.160s loss=0.298
I0814 00:02:20.984805 140006873859968 model_lib_v2.py:647] Step 5700 per-step time 1.160s loss=0.298
INFO:tensorflow:Step 5800 per-step time 1.153s loss=0.306
I0814 00:04:16.244708 140006873859968 model_lib_v2.py:647] Step 5800 per-step time 1.153s loss=0.306
INFO:tensorflow:Step 5900 per-step time 1.112s loss=0.313
I0814 00:06:11.617875 140006873859968 model_lib_v2.py:647] Step 5900 per-step time 1.112s loss=0.313
INFO:tensorflow:Step 6000 per-step time 1.166s loss=0.375
I0814 00:08:07.396801 140006873859968 model_lib_v2.py:647] Step 6000 per-step time 1.166s loss=0.375
INFO:tensorflow:Step 6100 per-step time 1.126s loss=0.358
I0814 00:10:03.687170 140006873859968 model_lib_v2.py:647] Step 6100 per-step time 1.126s loss=0.358
INFO:tensorflow:Step 6200 per-step time 1.126s loss=0.379
I0814 00:11:58.237306 140006873859968 model_lib_v2.py:647] Step 6200 per-step time 1.126s loss=0.379
INFO:tensorflow:Step 6300 per-step time 1.290s loss=0.347
I0814 00:13:53.521604 140006873859968 model_lib_v2.py:647] Step 6300 per-step time 1.290s loss=0.347
INFO:tensorflow:Step 6400 per-step time 1.149s loss=0.392
I0814 00:15:48.953862 140006873859968 model_lib_v2.py:647] Step 6400 per-step time 1.149s loss=0.392
INFO:tensorflow:Step 6500 per-step time 1.149s loss=0.312
I0814 00:17:44.389721 140006873859968 model_lib_v2.py:647] Step 6500 per-step time 1.149s loss=0.312
INFO:tensorflow:Step 6600 per-step time 1.155s loss=0.300
I0814 00:19:39.369437 140006873859968 model_lib_v2.py:647] Step 6600 per-step time 1.155s loss=0.300
INFO:tensorflow:Step 6700 per-step time 1.120s loss=0.311
I0814 00:21:34.940807 140006873859968 model_lib_v2.py:647] Step 6700 per-step time 1.120s loss=0.311
INFO:tensorflow:Step 6800 per-step time 1.178s loss=0.337
I0814 00:23:30.308483 140006873859968 model_lib_v2.py:647] Step 6800 per-step time 1.178s loss=0.337
INFO:tensorflow:Step 6900 per-step time 1.131s loss=0.410
I0814 00:25:25.358650 140006873859968 model_lib_v2.py:647] Step 6900 per-step time 1.131s loss=0.410
INFO:tensorflow:Step 7000 per-step time 1.129s loss=0.374
I0814 00:27:20.194301 140006873859968 model_lib_v2.py:647] Step 7000 per-step time 1.129s loss=0.374
INFO:tensorflow:Step 7100 per-step time 1.143s loss=0.380
I0814 00:29:15.444207 140006873859968 model_lib_v2.py:647] Step 7100 per-step time 1.143s loss=0.380
INFO:tensorflow:Step 7200 per-step time 1.121s loss=0.260
I0814 00:31:11.221878 140006873859968 model_lib_v2.py:647] Step 7200 per-step time 1.121s loss=0.260
INFO:tensorflow:Step 7300 per-step time 1.138s loss=0.560
I0814 00:33:06.106604 140006873859968 model_lib_v2.py:647] Step 7300 per-step time 1.138s loss=0.560
INFO:tensorflow:Step 7400 per-step time 1.203s loss=0.298
I0814 00:35:00.768404 140006873859968 model_lib_v2.py:647] Step 7400 per-step time 1.203s loss=0.298
INFO:tensorflow:Step 7500 per-step time 1.205s loss=0.283
I0814 00:36:56.007971 140006873859968 model_lib_v2.py:647] Step 7500 per-step time 1.205s loss=0.283
INFO:tensorflow:Step 7600 per-step time 1.103s loss=0.356
I0814 00:38:51.820330 140006873859968 model_lib_v2.py:647] Step 7600 per-step time 1.103s loss=0.356
INFO:tensorflow:Step 7700 per-step time 1.088s loss=0.342
I0814 00:40:46.510128 140006873859968 model_lib_v2.py:647] Step 7700 per-step time 1.088s loss=0.342
INFO:tensorflow:Step 7800 per-step time 1.088s loss=0.335
I0814 00:42:41.298324 140006873859968 model_lib_v2.py:647] Step 7800 per-step time 1.088s loss=0.335
INFO:tensorflow:Step 7900 per-step time 1.259s loss=0.282
I0814 00:44:37.463476 140006873859968 model_lib_v2.py:647] Step 7900 per-step time 1.259s loss=0.282
INFO:tensorflow:Step 8000 per-step time 1.119s loss=0.493
I0814 00:46:32.917315 140006873859968 model_lib_v2.py:647] Step 8000 per-step time 1.119s loss=0.493
INFO:tensorflow:Step 8100 per-step time 1.160s loss=0.389
I0814 00:48:28.727357 140006873859968 model_lib_v2.py:647] Step 8100 per-step time 1.160s loss=0.389
INFO:tensorflow:Step 8200 per-step time 1.160s loss=0.257
I0814 00:50:24.840504 140006873859968 model_lib_v2.py:647] Step 8200 per-step time 1.160s loss=0.257
INFO:tensorflow:Step 8300 per-step time 1.177s loss=0.399
I0814 00:52:19.689270 140006873859968 model_lib_v2.py:647] Step 8300 per-step time 1.177s loss=0.399
INFO:tensorflow:Step 8400 per-step time 1.137s loss=0.425
I0814 00:54:15.287885 140006873859968 model_lib_v2.py:647] Step 8400 per-step time 1.137s loss=0.425
INFO:tensorflow:Step 8500 per-step time 1.154s loss=0.261
I0814 00:56:10.641874 140006873859968 model_lib_v2.py:647] Step 8500 per-step time 1.154s loss=0.261
INFO:tensorflow:Step 8600 per-step time 1.133s loss=0.283
I0814 00:58:06.035442 140006873859968 model_lib_v2.py:647] Step 8600 per-step time 1.133s loss=0.283
INFO:tensorflow:Step 8700 per-step time 1.100s loss=0.360
I0814 01:00:01.960647 140006873859968 model_lib_v2.py:647] Step 8700 per-step time 1.100s loss=0.360
INFO:tensorflow:Step 8800 per-step time 1.139s loss=0.354
I0814 01:01:56.986142 140006873859968 model_lib_v2.py:647] Step 8800 per-step time 1.139s loss=0.354
INFO:tensorflow:Step 8900 per-step time 1.128s loss=0.382
I0814 01:03:51.603060 140006873859968 model_lib_v2.py:647] Step 8900 per-step time 1.128s loss=0.382
INFO:tensorflow:Step 9000 per-step time 1.226s loss=0.303
I0814 01:05:47.060079 140006873859968 model_lib_v2.py:647] Step 9000 per-step time 1.226s loss=0.303
INFO:tensorflow:Step 9100 per-step time 1.113s loss=0.281
I0814 01:07:42.896534 140006873859968 model_lib_v2.py:647] Step 9100 per-step time 1.113s loss=0.281
INFO:tensorflow:Step 9200 per-step time 1.155s loss=0.410
I0814 01:09:38.654875 140006873859968 model_lib_v2.py:647] Step 9200 per-step time 1.155s loss=0.410
INFO:tensorflow:Step 9300 per-step time 1.136s loss=0.283
I0814 01:11:34.246269 140006873859968 model_lib_v2.py:647] Step 9300 per-step time 1.136s loss=0.283
INFO:tensorflow:Step 9400 per-step time 1.194s loss=0.314
I0814 01:13:29.563839 140006873859968 model_lib_v2.py:647] Step 9400 per-step time 1.194s loss=0.314
INFO:tensorflow:Step 9500 per-step time 1.159s loss=0.291
I0814 01:15:24.543631 140006873859968 model_lib_v2.py:647] Step 9500 per-step time 1.159s loss=0.291
INFO:tensorflow:Step 9600 per-step time 1.108s loss=0.309
I0814 01:17:20.563082 140006873859968 model_lib_v2.py:647] Step 9600 per-step time 1.108s loss=0.309
INFO:tensorflow:Step 9700 per-step time 1.138s loss=0.292
I0814 01:19:16.940903 140006873859968 model_lib_v2.py:647] Step 9700 per-step time 1.138s loss=0.292
INFO:tensorflow:Step 9800 per-step time 1.149s loss=0.356
I0814 01:21:13.006796 140006873859968 model_lib_v2.py:647] Step 9800 per-step time 1.149s loss=0.356
INFO:tensorflow:Step 9900 per-step time 1.159s loss=0.253
I0814 01:23:08.883814 140006873859968 model_lib_v2.py:647] Step 9900 per-step time 1.159s loss=0.253
INFO:tensorflow:Step 10000 per-step time 1.158s loss=0.267
I0814 01:25:04.436773 140006873859968 model_lib_v2.py:647] Step 10000 per-step time 1.158s loss=0.267
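The per-step log lines above follow a fixed pattern (`Step N per-step time Xs loss=Y`), so if you save the cell output you can scrape step/loss pairs for a quick sanity check of the loss curve without opening TensorBoard. A minimal sketch; the regex and sample lines below are illustrative, not part of the Object Detection API:

```python
import re

# Matches lines like: "INFO:tensorflow:Step 100 per-step time 1.151s loss=1.925"
STEP_RE = re.compile(r"Step (\d+) per-step time ([\d.]+)s loss=([\d.]+)")

def parse_training_log(lines):
    """Extract (step, loss) pairs from Object Detection API training output."""
    points = []
    for line in lines:
        m = STEP_RE.search(line)
        if m:
            points.append((int(m.group(1)), float(m.group(3))))
    return points

sample = [
    "INFO:tensorflow:Step 100 per-step time 1.151s loss=1.925",
    "INFO:tensorflow:Step 200 per-step time 1.177s loss=1.953",
    "WARNING:tensorflow:num_readers has been reduced to 1",  # ignored: no match
]
print(parse_training_log(sample))  # [(100, 1.925), (200, 1.953)]
```

The pairs can then be fed straight into `matplotlib.pyplot.plot` if you want a standalone loss plot.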
In [ ]:
#run model evaluation to obtain performance metrics (e.g. COCO mAP)
#!python ./models/research/object_detection/model_main_tf2.py \
#    --pipeline_config_path={pipeline_file} \
#    --model_dir={model_dir} \
#    --checkpoint_dir={model_dir}
#Note: not yet implemented for EfficientDet, so this cell is left commented out

Visualize the loss, accuracy and learning rate in TensorBoard

In [ ]:
%load_ext tensorboard
%tensorboard --logdir './training/train'

Save our trained model

Exporting a Trained Inference Graph

In [ ]:
#see where our model saved its checkpoint files
%ls './training/'
checkpoint                   ckpt-6.index
ckpt-10.data-00000-of-00002  ckpt-7.data-00000-of-00002
ckpt-10.data-00001-of-00002  ckpt-7.data-00001-of-00002
ckpt-10.index                ckpt-7.index
ckpt-11.data-00000-of-00002  ckpt-8.data-00000-of-00002
ckpt-11.data-00001-of-00002  ckpt-8.data-00001-of-00002
ckpt-11.index                ckpt-8.index
ckpt-5.data-00000-of-00002   ckpt-9.data-00000-of-00002
ckpt-5.data-00001-of-00002   ckpt-9.data-00001-of-00002
ckpt-5.index                 ckpt-9.index
ckpt-6.data-00000-of-00002   train/
ckpt-6.data-00001-of-00002
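
The small `checkpoint` file in this listing is TensorFlow's bookkeeping file: a text proto whose `model_checkpoint_path` field names the most recent checkpoint (here `ckpt-11`). In practice you would just call `tf.train.latest_checkpoint('./training/')`, but the lookup it performs can be sketched in plain Python (the sample file contents below are illustrative):

```python
import re

def latest_checkpoint_from_file(text):
    """Read the model_checkpoint_path field from a TF `checkpoint` file's text."""
    m = re.search(r'model_checkpoint_path:\s*"([^"]+)"', text)
    return m.group(1) if m else None

# Example contents; your own file will name your latest checkpoint.
sample = 'model_checkpoint_path: "ckpt-11"\nall_model_checkpoint_paths: "ckpt-10"\n'
print(latest_checkpoint_from_file(sample))  # ckpt-11
```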

Important Note: Before executing the next cell, open /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/utils/tf_utils.py (double-click the file in the file browser to edit it).

Note: The above path may differ depending on your environment and setup. If you still get an error after executing the cell below, read the error message and make sure you are modifying tf_utils.py at its correct location.

We will modify the script at line 140, replacing raise TypeError('Expected Operation, Variable, or Tensor, got ' + str(x)) with this:

if not isinstance(x, str): raise TypeError('Expected Operation, Variable, or Tensor, got ' + str(x))

Make sure the indentation matches the surrounding code when pasting!
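
If you would rather not edit the file by hand, the same one-line change can be applied as a text substitution. This is a hedged sketch: the file path is an assumption for a Colab-style Python 3.6 setup, and the helper below only demonstrates the substitution itself.

```python
import pathlib

OLD = "raise TypeError('Expected Operation, Variable, or Tensor, got ' + str(x))"
NEW = ("if not isinstance(x, str): "
       "raise TypeError('Expected Operation, Variable, or Tensor, got ' + str(x))")

def patch_source(src):
    """Guard the TypeError so string entries no longer abort serialization."""
    if NEW in src:          # already patched: leave the source unchanged
        return src
    return src.replace(OLD, NEW, 1)

# Assumed location; adjust to wherever tf_utils.py lives in your environment:
# path = pathlib.Path('/usr/local/lib/python3.6/dist-packages/'
#                     'tensorflow/python/keras/utils/tf_utils.py')
# path.write_text(patch_source(path.read_text()))
print(patch_source("    " + OLD).strip() == NEW)  # True
```

Because the replacement happens after any leading whitespace, the original indentation of the `raise` line is preserved, which sidesteps the indentation pitfall mentioned above.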

In [ ]:
#run conversion script
import re
import numpy as np

output_directory = './fine_tuned_model'

#place the model weights you would like to export here
last_model_path = './training/'
print(last_model_path)
!python ./models/research/object_detection/exporter_main_v2.py \
    --trained_checkpoint_dir {last_model_path} \
    --output_directory {output_directory} \
    --pipeline_config_path {pipeline_file}
./training/
I0814 01:33:15.478254 139641864025984 ssd_efficientnet_bifpn_feature_extractor.py:144] EfficientDet EfficientNet backbone version: efficientnet-b0
I0814 01:33:15.478458 139641864025984 ssd_efficientnet_bifpn_feature_extractor.py:145] EfficientDet BiFPN num filters: 64
I0814 01:33:15.478537 139641864025984 ssd_efficientnet_bifpn_feature_extractor.py:147] EfficientDet BiFPN num iterations: 3
I0814 01:33:15.487169 139641864025984 efficientnet_model.py:146] round_filter input=32 output=32
2020-08-14 01:33:15.494914: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2020-08-14 01:33:15.528718: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-08-14 01:33:15.529280: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties: 
pciBusID: 0000:00:04.0 name: Tesla T4 computeCapability: 7.5
coreClock: 1.59GHz coreCount: 40 deviceMemorySize: 14.73GiB deviceMemoryBandwidth: 298.08GiB/s
2020-08-14 01:33:15.529520: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2020-08-14 01:33:15.531407: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-08-14 01:33:15.533118: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10
2020-08-14 01:33:15.533446: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10
2020-08-14 01:33:15.535394: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10
2020-08-14 01:33:15.539984: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10
2020-08-14 01:33:15.544695: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-08-14 01:33:15.544947: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-08-14 01:33:15.545858: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-08-14 01:33:15.546387: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0
2020-08-14 01:33:15.546660: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-08-14 01:33:15.552315: I tensorflow/core/platform/profile_utils/cpu_utils.cc:102] CPU Frequency: 2200000000 Hz
2020-08-14 01:33:15.552485: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x1556bc0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-08-14 01:33:15.552507: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
2020-08-14 01:33:15.656205: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-08-14 01:33:15.656852: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x1556a00 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-08-14 01:33:15.656886: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Tesla T4, Compute Capability 7.5
2020-08-14 01:33:15.657068: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-08-14 01:33:15.657642: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties: 
pciBusID: 0000:00:04.0 name: Tesla T4 computeCapability: 7.5
coreClock: 1.59GHz coreCount: 40 deviceMemorySize: 14.73GiB deviceMemoryBandwidth: 298.08GiB/s
2020-08-14 01:33:15.657698: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2020-08-14 01:33:15.657722: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-08-14 01:33:15.657743: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10
2020-08-14 01:33:15.657780: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10
2020-08-14 01:33:15.657803: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10
2020-08-14 01:33:15.657821: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10
2020-08-14 01:33:15.657840: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-08-14 01:33:15.657911: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-08-14 01:33:15.658432: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-08-14 01:33:15.658923: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0
2020-08-14 01:33:15.658985: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2020-08-14 01:33:15.660212: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-08-14 01:33:15.660240: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1108]      0 
2020-08-14 01:33:15.660252: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 0:   N 
2020-08-14 01:33:15.660387: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-08-14 01:33:15.660967: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-08-14 01:33:15.661517: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:39] Overriding allow_growth setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.
2020-08-14 01:33:15.661557: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 14071 MB memory) -> physical GPU (device: 0, name: Tesla T4, pci bus id: 0000:00:04.0, compute capability: 7.5)
I0814 01:33:16.341439 139641864025984 efficientnet_model.py:146] round_filter input=32 output=32
I0814 01:33:16.341610 139641864025984 efficientnet_model.py:146] round_filter input=16 output=16
I0814 01:33:16.446703 139641864025984 efficientnet_model.py:146] round_filter input=16 output=16
I0814 01:33:16.446875 139641864025984 efficientnet_model.py:146] round_filter input=24 output=24
I0814 01:33:16.733729 139641864025984 efficientnet_model.py:146] round_filter input=24 output=24
I0814 01:33:16.733899 139641864025984 efficientnet_model.py:146] round_filter input=40 output=40
I0814 01:33:17.018462 139641864025984 efficientnet_model.py:146] round_filter input=40 output=40
I0814 01:33:17.018628 139641864025984 efficientnet_model.py:146] round_filter input=80 output=80
I0814 01:33:17.543940 139641864025984 efficientnet_model.py:146] round_filter input=80 output=80
I0814 01:33:17.544115 139641864025984 efficientnet_model.py:146] round_filter input=112 output=112
I0814 01:33:17.982514 139641864025984 efficientnet_model.py:146] round_filter input=112 output=112
I0814 01:33:17.982705 139641864025984 efficientnet_model.py:146] round_filter input=192 output=192
I0814 01:33:18.567280 139641864025984 efficientnet_model.py:146] round_filter input=192 output=192
I0814 01:33:18.567503 139641864025984 efficientnet_model.py:146] round_filter input=320 output=320
I0814 01:33:18.702652 139641864025984 efficientnet_model.py:146] round_filter input=1280 output=1280
I0814 01:33:18.758325 139641864025984 efficientnet_model.py:459] Building model efficientnet with params ModelConfig(width_coefficient=1.0, depth_coefficient=1.0, resolution=224, dropout_rate=0.2, blocks=(BlockConfig(input_filters=32, output_filters=16, kernel_size=3, num_repeat=1, expand_ratio=1, strides=(1, 1), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=16, output_filters=24, kernel_size=3, num_repeat=2, expand_ratio=6, strides=(2, 2), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=24, output_filters=40, kernel_size=5, num_repeat=2, expand_ratio=6, strides=(2, 2), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=40, output_filters=80, kernel_size=3, num_repeat=3, expand_ratio=6, strides=(2, 2), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=80, output_filters=112, kernel_size=5, num_repeat=3, expand_ratio=6, strides=(1, 1), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=112, output_filters=192, kernel_size=5, num_repeat=4, expand_ratio=6, strides=(2, 2), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=192, output_filters=320, kernel_size=3, num_repeat=1, expand_ratio=6, strides=(1, 1), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise')), stem_base_filters=32, top_base_filters=1280, activation='simple_swish', batch_norm='default', bn_momentum=0.99, bn_epsilon=0.001, weight_decay=5e-06, drop_connect_rate=0.2, depth_divisor=8, min_depth=None, use_se=True, input_channels=3, num_classes=1000, model_name='efficientnet', rescale_input=False, data_format='channels_last', dtype='float32')
WARNING:tensorflow:Skipping full serialization of Keras layer <object_detection.meta_architectures.ssd_meta_arch.SSDMetaArch object at 0x7f000a201390>, because it is not built.
W0814 01:33:44.451916 139641864025984 save_impl.py:76] Skipping full serialization of Keras layer <object_detection.meta_architectures.ssd_meta_arch.SSDMetaArch object at 0x7f000a201390>, because it is not built.
2020-08-14 01:34:10.178320: W tensorflow/python/util/util.cc:329] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
INFO:tensorflow:Unsupported signature for serialization: (([(<tensorflow.python.framework.func_graph.UnknownArgument object at 0x7efff336c978>, TensorSpec(shape=(None, 64, 64, 40), dtype=tf.float32, name='feature_pyramid/0/1')), (<tensorflow.python.framework.func_graph.UnknownArgument object at 0x7efff336c9e8>, TensorSpec(shape=(None, 32, 32, 112), dtype=tf.float32, name='feature_pyramid/1/1')), (<tensorflow.python.framework.func_graph.UnknownArgument object at 0x7efff336ccf8>, TensorSpec(shape=(None, 16, 16, 320), dtype=tf.float32, name='feature_pyramid/2/1'))], True), {}).
I0814 01:34:29.790403 139641864025984 def_function.py:830] Unsupported signature for serialization: (([(<tensorflow.python.framework.func_graph.UnknownArgument object at 0x7efff336c978>, TensorSpec(shape=(None, 64, 64, 40), dtype=tf.float32, name='feature_pyramid/0/1')), (<tensorflow.python.framework.func_graph.UnknownArgument object at 0x7efff336c9e8>, TensorSpec(shape=(None, 32, 32, 112), dtype=tf.float32, name='feature_pyramid/1/1')), (<tensorflow.python.framework.func_graph.UnknownArgument object at 0x7efff336ccf8>, TensorSpec(shape=(None, 16, 16, 320), dtype=tf.float32, name='feature_pyramid/2/1'))], True), {}).
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/resource_variable_ops.py:1817: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating:
If using Keras pass *_constraint arguments to layers.
INFO:tensorflow:Assets written to: ./fine_tuned_model/saved_model/assets
I0814 01:34:46.018794 139641864025984 builder_impl.py:775] Assets written to: ./fine_tuned_model/saved_model/assets
INFO:tensorflow:Writing pipeline config file to ./fine_tuned_model/pipeline.config
I0814 01:34:47.705433 139641864025984 config_util.py:254] Writing pipeline config file to ./fine_tuned_model/pipeline.config

If everything worked, the output of the above cell will end with something like: Writing pipeline config file to ...

This means your model has been saved; you can inspect it at the path "./fine_tuned_model".

In [ ]:
%ls './fine_tuned_model/saved_model/'
assets/  saved_model.pb  variables/
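The exported SavedModel can be reloaded for inference. Below is a minimal sketch of preparing an input for the detector: the TF2 Object Detection API serving signature expects a batched uint8 image tensor of shape (1, height, width, 3). The `to_input_batch` helper is hypothetical (not part of the notebook), and the TensorFlow loading calls are shown as comments since they require the trained artifacts from the cell above.

```python
import numpy as np

def to_input_batch(image: np.ndarray) -> np.ndarray:
    """Add a batch dimension and cast to uint8, the dtype/shape the
    exported detection SavedModel expects."""
    return np.expand_dims(image.astype(np.uint8), axis=0)

# Loading and calling the exported model would look roughly like:
#   import tensorflow as tf
#   detect_fn = tf.saved_model.load('./fine_tuned_model/saved_model')
#   detections = detect_fn(tf.convert_to_tensor(to_input_batch(image)))

# Stand-in for a real test image (any HxWx3 array works here):
image = np.zeros((640, 640, 3), dtype=np.float32)
batch = to_input_batch(image)
print(batch.shape, batch.dtype)  # (1, 640, 640, 3) uint8
```

The returned `detections` dictionary would then contain keys such as detection boxes, classes, and scores, which we visualize in the inference section below.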

Run Inference on Test Images with our Trained Smoke Detector

We download the test data from Roboflow for inference.

In [ ]:
#download test images from Roboflow:
#export the dataset above in COCO JSON format,
#or import your test images by other means.
%mkdir ./test/
%cd ./test/
!curl -L "https://app.roboflow.ai/ds/lzluJXV2ee?key=9QBlNZuk3r" > roboflow.zip; unzip roboflow.zip; rm roboflow.zip
mkdir: cannot create directory ‘./test/’: File exists
./test
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   887  100   887    0     0    975      0 --:--:-- --:--:-- --:--:--   974
100 27.6M  100 27.6M    0     0  18.9M      0  0:00:01  0:00:01 --:--:-- 18.9M
Archive:  roboflow.zip
 extracting: test/ck0kl079lk71f0a46m6yo4q9m_jpeg.rf.0483e9a50ac12d00341fd4b75a0fa75e.jpg  
 extracting: test/ck0rp4r7382md0721515r9ll0_jpeg.rf.13ad39682b82c438ae4481d41eeec5bd.jpg  
 extracting: test/ck0kl1k8d9ks80701twyaq08t_jpeg.rf.2d1cb7bd063f1848582ac809383894f7.jpg  
 extracting: test/ck0u98uysslwa0794bmd4n7fd_jpeg.rf.18be3cbd848a3120b943d9c21c1fe93f.jpg  
 extracting: test/ck0lx4sqkjw920721qbehquxz_jpeg.rf.4aec9f79569c08f0ff3f87d1122c4e05.jpg  
 extracting: test/ck0kmftzekfwv0a46aj0nh5dz_jpeg.rf.4ffa9090f9a4c1733513924e2b759014.jpg  
 extracting: test/ck0tz4dsmshgh08637pfl2z1c_jpeg.rf.0870946d55cf935c17b816f26626e001.jpg  
 extracting: test/ck0rr6kut5kx30838ethvdv5u_jpeg.rf.2865207837571d00886da940ee70df8c.jpg  
 extracting: test/ck0t2zw8elp2f0838ve84hm51_jpeg.rf.1dcf5a6d17d61fce6524e6ec8c503daf.jpg  
 extracting: test/ck0km3uktkdxc0a46uf6e417v_jpeg.rf.2fc75db1cbba9da225afe4583c002716.jpg  
 extracting: test/ck0rr05k797je0721hrz4bw9e_jpeg.rf.065e2e74d323df445b446e92d896e121.jpg  
 extracting: test/ck0na5s461w2n0794ifd5c5cs_jpeg.rf.4a6f05a9e91460629adee99d8e20393d.jpg  
 extracting: test/ck0tzmo81wtq50721yer41mgt_jpeg.rf.276c6510070d4b9d68b75821149033ec.jpg  
 extracting: test/ck0rpimt3ifig0848hijlm44t_jpeg.rf.0ec63fdcb66b24192f9b22e9c099f350.jpg  
 extracting: test/ck0lyh3mpi04c094450wwwb9g_jpeg.rf.04ee66935b32c7af8bb098a7c3a1b551.jpg  
 extracting: test/ck0ngmfoyjka50a46cagrxpig_jpeg.rf.44a7e526a11e812b0fa001aa7ecf1224.jpg  
 extracting: test/ck0kkg0u65u3q0863z6w2psqp_jpeg.rf.29f3de3c9edd525793248683330cac09.jpg  
 extracting: test/ck0keq7xa4xxd0863otmigtsy_jpeg.rf.37ff94262cd7a0503169cd8ef3cfe50e.jpg  
 extracting: test/ck0tznjlaumey0701fbjp1dbu_jpeg.rf.445a561a7c735446568bc43f715a2157.jpg  
 extracting: test/ck0ukig53tfv30794pt4u6ude_jpeg.rf.48bef66940d16ea5b77903d57cea9398.jpg  
 extracting: test/ck0ndxwjf3icx0794m3hlih4u_jpeg.rf.1e43b54e18bdf2fcfd542ce216e10813.jpg  
 extracting: test/ck0km4x1h672j0794gya4kfco_jpeg.rf.0de452014c32d8ca2c6100e6af09278b.jpg  
 extracting: test/ck0lxeugahhve09449l5anbuj_jpeg.rf.2c9bef0da4ae00e86f963ca74296abd2.jpg  
 extracting: test/ck0kl5uir7ej708386nnh95ra_jpeg.rf.582e32a7465bdd58e1b14fb41f66e813.jpg  
 extracting: test/ck0kn6a6093460944dnwl19so_jpeg.rf.54a78e5465c5d0d2cc40f07ac23e67dc.jpg  
 extracting: test/ck0rpjxds42tv0863ke8nrsmn_jpeg.rf.58992022f3d817a556efe6d7d26fbdec.jpg  
 extracting: test/ck0kdhizwj37t0a461n7bsvun_jpeg.rf.6ba07abd6eb312ecb48cc1f64e3761de.jpg  
 extracting: test/ck0kkhi2lb1b30721f7ltwn14_jpeg.rf.6887349a6f76d1e8ae7d27738e2b0d55.jpg  
 extracting: test/ck0t592vckr6p07949h19oxce_jpeg.rf.6e2ec452feccc87ce6c76715258bfd5c.jpg  
 extracting: test/ck0lxh38zu5ri0848ra81j7ne_jpeg.rf.5eed4033d560dfeaf6321bf75e34db1c.jpg  
 extracting: test/ck0ndcirh69sn0944tmwamh75_jpeg.rf.2107fe99657578bc43e16a6f870d7d17.jpg  
 extracting: test/ck0qb78lnsvjg0a46r1s6hxr1_jpeg.rf.804326bbb5c28fa2a3e2319c6d38927d.jpg  
 extracting: test/ck0kcljocj0qo0a46rzj3y211_jpeg.rf.855cc5b0c397f83ec8849af4bbbda889.jpg  
 extracting: test/ck0kcpbi29y0x0721g7dlzzcs_jpeg.rf.860349f5f2bd4853db1c08ed98f5c636.jpg  
 extracting: test/ck0rp7n2684q30721hrzu5r15_jpeg.rf.753a0b16f46efc5c5fbe9f6f2dc0755e.jpg  
 extracting: test/ck0t4mzl8ljds0863b0iswyfc_jpeg.rf.867ccbdfdb4338c9457fc232be46d82a.jpg  
 extracting: test/ck0keqofqkeab0848eq712ch1_jpeg.rf.7bd868f64a955ed69966734927b1d06f.jpg  
 extracting: test/ck0kepbs9kdym0848hgpcf3y9_jpeg.rf.9745c22d13a697cf5147c74c97bb04e4.jpg  
 extracting: test/ck0lxhbtdewio0794g9qrp7o9_jpeg.rf.7b57d84fc4caa347e18b3ab4f8df44ad.jpg  
 extracting: test/ck0rql046icek0a462kapdu97_jpeg.rf.7eeef505425e37834eaf6cbdd26246d8.jpg  
 extracting: test/ck0qd5yquiaiq07016q73qoz3_jpeg.rf.6e57dabe840f25508735c73fed2cbae7.jpg  
 extracting: test/ck0kkywt08q2d0944abjjfi8q_jpeg.rf.a27234d3482cf812288ec37dedcca801.jpg  
 extracting: test/ck0kkthj17c9g0838c3vjfndl_jpeg.rf.a7c1f468f1037a74aac2170d60f30f21.jpg  
 extracting: test/ck0twzs19s8ld08632qx4gg12_jpeg.rf.90164a66653f248de9d4c915ddc3a73d.jpg  
 extracting: test/ck0t421swni18070115tymsht_jpeg.rf.66a07a20c5f4c857506998fa17faa847.jpg  
 extracting: test/ck0nfnbw45srt083889319uj8_jpeg.rf.8fa1701c2eeabf221eddce82377e11cf.jpg  
 extracting: test/ck0rqf6p055aw0838g7rq1u9v_jpeg.rf.bccbb6103770734f2f96cf0d20bae2c9.jpg  
 extracting: test/ck0qb7h0asvnf0a46l0myka01_jpeg.rf.b1ba3b3e56ad0dbb2c129b0af342f0fd.jpg  
 extracting: test/ck0nfuci585lm0701jw9i5zhj_jpeg.rf.bf15e9a0754fac9b254dfd49cd7fe46e.jpg  
 extracting: test/ck0ng3jtr5k7j08637da4685p_jpeg.rf.b915a960b09060b8b0fe71362909c254.jpg  
 extracting: test/ck0kcko719xjn0721pujcl4if_jpeg.rf.b245720448c79a79f2746096d2422221.jpg  
 extracting: test/ck0lyfhltum8h0848fs747twi_jpeg.rf.9aac1438e1c55e1720686f6764ff93c0.jpg  
 extracting: test/ck0t2fzvhlj200838b26ckwda_jpeg.rf.bd7fab4a360cb871eeda93ecde2bb799.jpg  
 extracting: test/ck0tzzys2sm6b0863k8qyy7hp_jpeg.rf.c6773924f183d9abe8647e3e1821bb56.jpg  
 extracting: test/ck0twzjvps8kb0863yht85hg8_jpeg.rf.a96c663744d2809af3ce0c31dd026e1d.jpg  
 extracting: test/ck0kd86qe8gf507018qw8usfe_jpeg.rf.e33e2fbaf0c6b26fd463066ce3ac9795.jpg  
 extracting: test/ck0km4hlk63zp0863ur0owhkg_jpeg.rf.a5a5faabd821586f034bcdaeba02661e.jpg  
 extracting: test/ck0m1nvd7hi5g0794ixq4uhub_jpeg.rf.d197a3420649f83c74f33e0ed4afffcf.jpg  
 extracting: test/ck0kow7lga3y30701f2yvt2b2_jpeg.rf.e4f6df0d3016b292a1a7651e8a076721.jpg  
 extracting: test/ck0qbwggghon207016zt7rz5a_jpeg.rf.d86639994116044cb058c51e9e312a5f.jpg  
 extracting: test/ck0ncye0iiouc0848ostst7bq_jpeg.rf.dbd9c6da6bad433fa8255033bf26dd63.jpg  
 extracting: test/ck0t1qg3kyft90a46nddriu6k_jpeg.rf.e7c1a2c207460801cd2d14749d056030.jpg  
 extracting: test/ck0ndoj9r8u8w07216trgrwuz_jpeg.rf.e8e635328f2d8e6e1968f7d0339fbf4e.jpg  
 extracting: test/ck0tswxwqqmau0794imifdgnh_jpeg.rf.eeeb8e8f4c469c266caaf1a8757a578e.jpg  
 extracting: test/ck0txn3y1uaec0944bo6m4q6c_jpeg.rf.e3d72a96b16e994509e4aa0d729ee804.jpg  
 extracting: test/ck0u09p0z70h60848av5vdh7v_jpeg.rf.e860b6f790820febff5de89ce55a984c.jpg  
 extracting: test/ck0khsg8rangi07215h1hso72_jpeg.rf.f5a9c96fd7519fdf062a401ed8128c5c.jpg  
 extracting: test/ck0kmgevm69ac0794mihosd3p_jpeg.rf.ef6ab047e0d8e7fe5e25de0b52f0d6fc.jpg  
 extracting: test/ck0t3zlhdz6180a46kq06kflk_jpeg.rf.ffb55e97af35a69bc6d18e8589e34c2f.jpg  
 extracting: test/ck0ujzue9uicd08638rcyl1ro_jpeg.rf.ec3d9592c5eb8919a21bb039c6cf0ec6.jpg  
 extracting: train/ck0kdra5c7mjh0944w97n0exh_jpeg.rf.00985fbdbfac1b2841364e6357223ce0.jpg  
 extracting: test/ck0ncwdaxhgtf0a46wezybiq2_jpeg.rf.da7da461e048f4cf81ca7f8eccee1f9c.jpg  
 extracting: train/ck0negbgs3uhi0794nkdvb6zi_jpeg.rf.00ca6b9b7ad56f0c8532f9392313c5d2.jpg  
 extracting: train/ck0rqfvfjizmt0848qs0rsd4l_jpeg.rf.0052400a2af2215e75d1e0299909fb63.jpg  
 extracting: train/ck0tx7k0tr5rz0794r2twuz1o_jpeg.rf.003e93ffeba7671566f15b13faf3fd0f.jpg  
 extracting: test/ck0u0muhduoik0944fwbze43u_jpeg.rf.eda9462d39225ee0c3470fb6eb726955.jpg  
 extracting: test/ck0rqjl29ibr20a46351u8ys5_jpeg.rf.e419c85c6d6e8c252bb20a5279e5be9a.jpg  
 extracting: train/ck0twhqawsr970838xj2topr5_jpeg.rf.045fdb501219116623fe6713955c3b8a.jpg  
 extracting: train/ck0tylooksf3f0863s6tbwxhx_jpeg.rf.056e8171d1b2205e2ba753d17b987e73.jpg  
 extracting: train/ck0owog84vf1d0701sbf84fw6_jpeg.rf.01fa4c32fe1f9dac9c7899c7680f1f6e.jpg  
 extracting: train/ck0uiwjhb843y0a46w2o985sl_jpeg.rf.0296ec752f6212b942dcdc060e61af82.jpg  
 extracting: train/ck0rqdyeq4kxa08633xv2wqe1_jpeg.rf.061f4b1d6ac5dd1a3e33f59351e99099.jpg  
 extracting: train/ck0tsz3b0624008481fmfbt28_jpeg.rf.05f4e2e58727ce4a0871edc417b8d7f1.jpg  
 extracting: train/ck0qd7cyiib8f0701npyn63rc_jpeg.rf.05f37b4b59053aaed23ab9637f6be568.jpg  
 extracting: train/ck0qbnoibg4ip0838na1tnqe8_jpeg.rf.0658d620a3996d955173241a05fc4240.jpg  
 extracting: train/ck0qbolrmhccn0944s970z7xn_jpeg.rf.0686c8c77898eed54e1b113f233922e3.jpg  
 extracting: train/ck0t3iu3flv0k0838lql92x5b_jpeg.rf.064fa8544e6c80946808e96161466399.jpg  
 extracting: train/ck0lx8c34u38a08480v1hz7dz_jpeg.rf.067d7f7377875b3650387abe76de0229.jpg  
 extracting: train/ck0twef0nu7ej0701ku08izg4_jpeg.rf.076a3ddc56ed2b0d662232164f81bd21.jpg  
 extracting: train/ck0uki6ppytkz0721bttibej8_jpeg.rf.0a0cea88ca0661e0d8c089d725aabb50.jpg  
 extracting: train/ck0lxfa6uepfr0863exlzozuc_jpeg.rf.06bdccae42268d9f91b2ba21fe85a401.jpg  
 extracting: train/ck0t4sh7pnk3v09447wbxifkk_jpeg.rf.0b629c6baa98dc18c548a008b3795ed5.jpg  
 extracting: train/ck0kn8y5y9y3507018k5p06ma_jpeg.rf.0adf812ae581904f3f069f1e1e7afab7.jpg  
 extracting: train/ck0qbu8qqenrd0794vafe93ht_jpeg.rf.0dcf3273c95362d972c3236493025441.jpg  
 extracting: train/ck0l8askp96bh0794bppil14t_jpeg.rf.14671b11836b544da6f4503081a9afb0.jpg  
 extracting: train/ck0nd794g00mz078022z17djb_jpeg.rf.0b651ad0446623dc29600cb991f1ca26.jpg  
 extracting: train/ck0ovtzvntedo08385kjpyx6a_jpeg.rf.097b4b81b610c081369ac7993c964ac5.jpg  
 extracting: train/ck0m0f8x4hwsq0838adn9hkti_jpeg.rf.133742e13d4baba91baa9a3d48158b5c.jpg  
 extracting: train/ck0lx5e5rhf4f0944ldgqzhf1_jpeg.rf.1290e838cf3af2d5d49aa8493eb00165.jpg  
 extracting: train/ck0ow7vs07tuz08485n4yz5si_jpeg.rf.0fcde3676172f9833ace20275a779e20.jpg  
 extracting: train/ck0nepj3s015b0a88py3wvbqu_jpeg.rf.12294630d8d186eb73387113265f9d3e.jpg  
 extracting: train/ck0txvj9m6otr0848fayj9tte_jpeg.rf.14aed75ed8560617eeef5504bd3cc212.jpg  
 extracting: train/ck0rqlgco58j50838724x6rym_jpeg.rf.12af18b62f771f5320d3d93528fe3cd6.jpg  
 extracting: train/ck0u9e8u77f9y0a46i2qc7hje_jpeg.rf.08b27b6ac0c8e9a125014256ea7d8ddb.jpg  
 extracting: train/ck0kcsfho4r010863q5sjbub6_jpeg.rf.060ede57a5da8274f92bf01e4dded5cb.jpg  
 extracting: train/ck0kcouioj11m0a46yjyp0zlc_jpeg.rf.0ec3b08a383aadf9d7aeebacc2c83a05.jpg  
 extracting: train/ck0km5c8vke4q0a46trc774lh_jpeg.rf.14c5dce2d75cc0f38bbb79e61e5ef614.jpg  
 extracting: train/ck0qd5it1i1wl0944fqp8vf9c_jpeg.rf.0d9d19745ac349d6cf6a4c9d6cc5abaf.jpg  
 extracting: train/ck0m1643sidi5083850vpmnkb_jpeg.rf.17b49c77686c9a18938839625778da93.jpg  
 extracting: train/ck0l9xgch9grx0794i0878xu3_jpeg.rf.165500c4dd50b75cc0cf96025038fff5.jpg  
 extracting: train/ck0kht0vs5k2r079428olg3jt_jpeg.rf.17102fd89a5f6d7b7b362cd5d4f83a07.jpg  
 extracting: train/ck0tyfppk6qqc0848u6xtpzgx_jpeg.rf.1467491ad5645dabf3b12e3c6be6a064.jpg  
 extracting: train/ck0txdhj9wk390721me7hmajn_jpeg.rf.1852696c217c342bd247ef39fe39b22a.jpg  
 extracting: train/ck0nfq2hsj1ro0a46o4gncg6r_jpeg.rf.16661820f1b0d0cde277985f4666826b.jpg  
 extracting: train/ck0uj9sgjyq2b0721813otzrx_jpeg.rf.18fc85e6de1b1a8e0c443505a16a4079.jpg  
 extracting: train/ck0uk5cl3877p0a468qm2bfay_jpeg.rf.1a47c23f69c11dfdcfc2bf849905afa9.jpg  
 extracting: train/ck0klds7p9myv07018b68zcsu_jpeg.rf.180b5a09252954e0a43eaad68b168c01.jpg  
 extracting: train/ck0twxu2f5xaj0a4648ury7hi_jpeg.rf.19451b9d1c328d8633fcad721c9dfcfa.jpg  
 extracting: train/ck0negpok9ao50721aw0ozvpl_jpeg.rf.19421bb30be2447eb7680184491abb1b.jpg  
 extracting: train/ck0t765i5mk8j0863od0a5ghy_jpeg.rf.1a92472df6a32701dea063ebc561c7c3.jpg  
 extracting: train/ck0u00a7o6ask0a467afyogi5_jpeg.rf.1c4b6d6cf5a3033820f859ea2471e847.jpg  
 extracting: train/ck0kl0ifc5xd60863hxynsup8_jpeg.rf.1c344629a76cddfc1ba355087f4b3c38.jpg  
 extracting: train/ck0keop2ya51r072149ntr73o_jpeg.rf.1a46bfed89a41f84497024a0cb92a5a5.jpg  
 extracting: train/ck0kdsrnk4tm80863omawzddc_jpeg.rf.14503f76648fe065f44bb78f5fda01db.jpg  
 extracting: train/ck0ty7yegwn140721h5wixp2h_jpeg.rf.1e9156c2806a6b559d41373d8dd11038.jpg  
 extracting: train/ck0lxdhtohhdr0944vck0s787_jpeg.rf.1eb154d42831fe2579cab976cd270182.jpg  
 extracting: train/ck0l9j6n6oqjo0848ps5blk3b_jpeg.rf.224de57dc5e3861c454d5918c54315f9.jpg  
 extracting: train/ck0tyx4x7t16i0838grtlspkh_jpeg.rf.1d3ad943a3a41674718cbfd47cb81860.jpg  
 extracting: train/ck0t5k7tonwo20944nfnbqogm_jpeg.rf.207a19a57ece0d7697e0b35b8b55816a.jpg  
 extracting: train/ck0kn966blrt008482afk6mhy_jpeg.rf.213144d426379461b5074e533cb57aa9.jpg  
 extracting: train/ck0u0fzglungw0944luype68g_jpeg.rf.1d20ac26204e1f4e936624e42fbfb96a.jpg  
 extracting: train/ck0tswp1wtrfp0701idogph92_jpeg.rf.1f849dca8d13210f42e863c2d0515e1d.jpg  
 extracting: train/ck0ujwlr98tkt0848fudx9cb3_jpeg.rf.23d5807fa3614b2247c6cfa0612eb667.jpg  
 extracting: train/ck0u12y7v75dn0848omjgbtgs_jpeg.rf.1fdfc2988f335098543327d3ceb512a5.jpg  
 extracting: train/ck0tszlepw02k0721zr71djfj_jpeg.rf.19d7705296adb471ac5086c4c2d3e4e2.jpg  
 extracting: train/ck0u13xaxuv4k07010miwxisb_jpeg.rf.1f5c0ae5ca657b07df2010f367349005.jpg  
 extracting: train/ck0kl11aa7dlt0838zoqikube_jpeg.rf.1faf2f3f247eb7e0615537ffd9c7a9df.jpg  
 extracting: train/ck0t3htqxn88w09443dhqtzrh_jpeg.rf.24e475927902b18f79367eea652afe73.jpg  
 extracting: train/ck0ow39yt6j650a4633r8lu2g_jpeg.rf.23fabf19b5d54bbd8a7fa6f21c7030fa.jpg  
 extracting: train/ck0kdvg6tj4hn0a46k06job5k_jpeg.rf.227b89eacbda8866f4676723d9ba0f60.jpg  
 extracting: train/ck0kd2lpc7kvu0944pq8xomg1_jpeg.rf.1ef2995841cbae56a267cba0b663c42a.jpg  
 extracting: train/ck0twkzggsrnb0838b6na9joz_jpeg.rf.275724e3ab1fb46c39ccdd9a8f7d42e6.jpg  
 extracting: train/ck0t2zodcyvqs0a46wwxb53xm_jpeg.rf.27f1d7d5374dcf04d7f26eab1358a6ce.jpg  
 extracting: train/ck0nf4ks0js7z0848cku0h6t5_jpeg.rf.28079aa1ce895658c33c71326d6e9271.jpg  
 extracting: train/ck0nf4rw3093h0a886by1nwca_jpeg.rf.2981c56b39da0e5834ebc27960a98df1.jpg  
 extracting: train/ck0nabl6wgb5q0a46x056c88j_jpeg.rf.27e48e38bc5818db6c2ab2639105b160.jpg  
 extracting: train/ck0kosmizkqdq0a46xgz65fgt_jpeg.rf.2603478959f22852f577d117a7d549d5.jpg  
 extracting: train/ck0qbex0ljri20721b1v9gyp5_jpeg.rf.2990b57dd4f59608e7e092387933edf7.jpg  
 extracting: train/ck0tz7lcnugfi0944xj04x69y_jpeg.rf.2a0da955a0cab4902eb9adf2369ed3cd.jpg  
 extracting: train/ck0kdhymna0b10721v4wntit8_jpeg.rf.29b06dfa4d8305f129c392f9c89c13bf.jpg  
 extracting: train/ck0lyf2lnhyy10944t8eb842u_jpeg.rf.2a598ecec20c9722000d6ac0749b4511.jpg  
 extracting: train/ck0neurvn42jv0794al16dzhw_jpeg.rf.29f8da811f2c9b45609b1f725bc3b7f8.jpg  
 extracting: train/ck0kfnjva6jp60838d6vqyxq2_jpeg.rf.27b4f2be2ea1592665024230808cbfb0.jpg  
 extracting: train/ck0l7xwvu94160794ed6vldx0_jpeg.rf.2c389b161e7fd3ba64aa1c35e7bb8760.jpg  
 extracting: train/ck0ncji892we50794albpt7x4_jpeg.rf.2ac9736cb9bec2da2a5d364e0468ad53.jpg  
 extracting: train/ck0u155em6hy90a46mm9izvjz_jpeg.rf.2bb8c77fc46ee12945f11ce66b749730.jpg  
 extracting: train/ck0t4z1yrkmfv0794t2oqhd3f_jpeg.rf.2cbd2944f8c733826c485c9ed491a410.jpg  
 extracting: train/ck0ounuswsms108388flcbazi_jpeg.rf.29dc5b877496c71c0326834ae4f151b5.jpg  
 extracting: train/ck0u02mwv6bah0a467209caub_jpeg.rf.2b8a875d8b502635299f85bd50848ca7.jpg  
 extracting: train/ck0tywg9ruew50944r0qom5i0_jpeg.rf.2c81fb9ea6db57fca619fd649758a77a.jpg  
 extracting: train/ck0lxekamu4wa0848z3xssbek_jpeg.rf.2bc2e7311bbb85860c72bd4bf709cfba.jpg  
 extracting: train/ck0oumds8r2ws0794yqsnls8v_jpeg.rf.2de0b480d09720d8a5c389cfaeb6ab0a.jpg  
 extracting: train/ck0twzaqtua2r0701hljesqvp_jpeg.rf.2d5e187f1c489fcce3399eb8bdd7e9b0.jpg  
 extracting: train/ck0qc7df7fjr30863hqmtj5i3_jpeg.rf.2d162c86f8f5421abf70d7f81369ceed.jpg  
 extracting: train/ck0lyhdgvtl1m0a46g6vawhau_jpeg.rf.2e42021639bdfb42c3e691a71178ad9f.jpg  
 extracting: train/ck0tt3hm6tsh90701m4t0sq7e_jpeg.rf.2e2f1d7b0945cf6983a24af377165141.jpg  
 extracting: train/ck0ncrfj130c40794scji9fm2_jpeg.rf.2e5e69529b510bbbdf7ec0eccbdbfef8.jpg  
 extracting: train/ck0m1mrauhbpe0863qyvhhrcx_jpeg.rf.2ea8621636ed211a87071e1249f5699c.jpg  
 extracting: train/ck0rqqw9o6t8x07017cthtiom_jpeg.rf.2fe20c8fd0e8ec5c404b421f9c6705b2.jpg  
 extracting: train/ck0ng4fa263vn083809qkbt4y_jpeg.rf.2fabe55038c6e7ccda230a61c12fed57.jpg  
 extracting: train/ck0t20os6mrcb0944xrc0wamx_jpeg.rf.30995a73e9c9d57ae6ced0cb81c796a1.jpg  
 extracting: train/ck0k9dg0vjxcg0848rmqzl38w_jpeg.rf.2ee6be511c4dd076db5ab715cabe9f9d.jpg  
 extracting: train/ck0u01dbyuoe30701p6pfzb0q_jpeg.rf.2f9c8abdb27f4e25b055f6e957df7a9c.jpg  
 extracting: train/ck0qbsnz6fcn40863jl1deoe9_jpeg.rf.3119e59380e51099fb15a8f9fd88b8ef.jpg  
 extracting: train/ck0tzjni6sjt50863e6a6a8fl_jpeg.rf.2674502e727aa014851f77076f1b9970.jpg  
 extracting: train/ck0txsj116ojr0848emet52dc_jpeg.rf.30e64c544b6afa33a92d1d5b682ad87f.jpg  
 extracting: train/ck0qc6vn5htmh07016mtpli3w_jpeg.rf.30c5d5ac44cbf854ff0da08e205d2596.jpg  
 extracting: train/ck0kck2czj0l10a46f6xkvofl_jpeg.rf.2ef37f6dc2366721ade35555a7c60921.jpg  
 extracting: train/ck0t5243y02ss0848d4w8z1bl_jpeg.rf.33ba9c8ace58bedab4fdff2493193d9c.jpg  
 extracting: train/ck0na2ozl2ejz08633ausfi26_jpeg.rf.2ac426ddb3fba003fb525d251cd3414e.jpg  
 extracting: train/ck0kkxafs5wwc08631zxdpdyd_jpeg.rf.353d59e3e4c392a7ffaf16c65082cc8c.jpg  
 extracting: train/ck0lwlv9fjpfg07217xwy7qne_jpeg.rf.315251b5fb89502a9f414ed68f613ec7.jpg  
 extracting: train/ck0tsyvp361ow084867dinue0_jpeg.rf.34e009ab031423be4dd96112633d100a.jpg  
 extracting: train/ck0kewsaha6hh07215jgx1bp2_jpeg.rf.32085cfacfbc533913cb789c718def83.jpg  
 extracting: train/ck0kkwqeq5zxe07942ouxkr5q_jpeg.rf.341f13a4aee9d556c14bada70b55c5f3.jpg  
 extracting: train/ck0u9dw5dsmcv07947ss72195_jpeg.rf.32886e310697372320cc6eefd86d04a9.jpg  
 extracting: train/ck0na7z5a7deh0721q2xhjvl6_jpeg.rf.313f1dbbb2506bcc3cd0933ec750c98f.jpg  
 extracting: train/ck0t52natq0it07218mpku7q5_jpeg.rf.3663a4f28df9bbd5e5968a0f6699618c.jpg  
 extracting: train/ck0tyjzuxsewd0863g6vu5zec_jpeg.rf.377715a894a58aa7168fee47bf6b528a.jpg  
 extracting: train/ck0kosu846k350794gya7bot0_jpeg.rf.36c5b2b0c9bd6932261e36bac1244d45.jpg  
 extracting: train/ck0t2tlmmzfds0848mvj42ujg_jpeg.rf.2c0aca0093fc83871584e9bba387720d.jpg  
 extracting: train/ck0tyhb0zszbr0838w1dyisjw_jpeg.rf.3ab620b672a68320fd4d96d6b4961ff1.jpg  
 extracting: train/ck0tzpygswuae0721bhuvjofj_jpeg.rf.3914bebee5fe67b5581ec4ec8bb0f075.jpg  
 extracting: train/ck0qc8ib3k6v40721gt60o67m_jpeg.rf.37dbab639e7dd1da4b32c33653cf318e.jpg  
 extracting: train/ck0ujob3n85zc0a46r0pdmhoy_jpeg.rf.3c75e0df1d344366ad02092c1eac1028.jpg  
 extracting: train/ck0ng5zyb5lrz086320xihwm8_jpeg.rf.3ee4437733bd30040e8eee75412da1dc.jpg  
 extracting: train/ck0u96z1nxzlx0721hdcmet81_jpeg.rf.41681bd8c02eac778a62c26dfb0c60ca.jpg  
 extracting: train/ck0l9dfokcv7o07013t4kgamc_jpeg.rf.391f75ca3d4d2f54f0e16ad725f0b50d.jpg  
 extracting: train/ck0l9434gamzj0838n5rjdp1a_jpeg.rf.4116062c29f098f4c906863134b0374d.jpg  
 extracting: train/ck0kkz639le090848leseakdk_jpeg.rf.4019d7383e5c5881c19b8d516ecb6c5a.jpg  
 extracting: train/ck0kmrvn590jl0944asem2ofh_jpeg.rf.41acbf2d2730876e2ae550a54916346d.jpg  
 extracting: train/ck0rqrdr86mhp09441c8d69w7_jpeg.rf.3854db8388393710736ab415c4908fa1.jpg  
 extracting: train/ck0kkg80v5x900794hcgsb4e1_jpeg.rf.4047e023154f64d64b0f57543bb4fd80.jpg  
 extracting: train/ck0tzqdzqwucd07218blh12ir_jpeg.rf.42b35bfa7205a85b2de0e23b3f7df518.jpg  
 extracting: train/ck0tt28fjsc1i0838s2ji5dhe_jpeg.rf.424e1371bb40abea1f159f5f5ce69f66.jpg  
 extracting: train/ck0twwuxz6k9i0848br68e8wm_jpeg.rf.433fcbe6821acb23ddff4c685fcfdb1e.jpg  
 extracting: train/ck0t6p0chogef0701xhns6ujk_jpeg.rf.36c2901c1eb820dbfba6a73994046718.jpg  
 extracting: train/ck0qbakbhjoyy0721qw4fut6n_jpeg.rf.3efccea007b65adb2f89b9d81817e5f0.jpg  
 extracting: train/ck0t2gf1wypnz0a4651vbm3b5_jpeg.rf.437cc23454a6833068959ec292773502.jpg  
 extracting: train/ck0t5xpty0hyp08488g6a7n6l_jpeg.rf.4389de0431852c66f653348d9d652eea.jpg  
 extracting: train/ck0rqjda68xrv07215zq41dpi_jpeg.rf.3eae694ce2ebee9bf680df19b1732580.jpg  
 extracting: train/ck0kcbvwl4tja0794csyimfqu_jpeg.rf.4139c42c8ba15ec08c0c259d8d9d1458.jpg  
 extracting: train/ck0t2pyk3l0ym08636r4hqnty_jpeg.rf.43038a65e85c4aab3701c6e3dde8887e.jpg  
 extracting: train/ck0t6z888oklq0701ri4jser9_jpeg.rf.46e424378ec3c91f88828f07e7a90487.jpg  
 extracting: train/ck0m0ch9ugnna07940o8x989j_jpeg.rf.439accd57324d0807cb4560c14990ff9.jpg  
 extracting: train/ck0kky5oy7d3s08381yd0i465_jpeg.rf.46981ab14d4e4671c05c701ac6758a3b.jpg  
 extracting: train/ck0kmf5q58yik0944gdwwrqnt_jpeg.rf.3ef922d47bea7534ce7704b8fd9a603a.jpg  
 extracting: train/ck0qc7xskeu3r07948qg5ycne_jpeg.rf.49972425dcee32f312918be15de7be51.jpg  
 extracting: train/ck0qbrxq0fc870863u9dy1xsl_jpeg.rf.491da214f69a9f839531999505579e0a.jpg  
 extracting: train/ck0rpje8d32fg07940gwkympb_jpeg.rf.44f88957e8fe85b52ca38d0a7991e764.jpg  
 extracting: train/ck0ndqvn48v820721miq46qnl_jpeg.rf.499d9937f1ab51d921702cf552740050.jpg  
 extracting: train/ck0l9y6snotc30848s7ciyz6j_jpeg.rf.457695d886cddfd98ec3d4d60de1178c.jpg  
 extracting: train/ck0k9dzyzirme0a46fhirxayi_jpeg.rf.47488ef0c71fd078c1745939e36daac4.jpg  
 extracting: train/ck0l8cfmgahkk0838nbgj26un_jpeg.rf.49c40c34a1f5e6aa82cc2f9df919d461.jpg  
 extracting: train/ck0t45urenix00701imio8cdw_jpeg.rf.4a151efba4098c07c27033dc468f44eb.jpg  
 extracting: train/ck0t782sh0yo80848y50f5ugf_jpeg.rf.433ea431079a860cd60f6c12b8d20d7d.jpg  
 extracting: train/ck0kcju199xgd0721ljl6cd2n_jpeg.rf.472f20a2ef7858d9f8fc4dab0b39b450.jpg  
 extracting: train/ck0lxi2vsiexh0701c7jcu934_jpeg.rf.47e91433d70c8f1ffc86105773855a43.jpg  
 extracting: train/ck0u0i88bunv00944unevdbqg_jpeg.rf.470259e0cdc969d54cd0e56da6c2f51b.jpg  
 extracting: train/ck0qb93azjo480721ojf86ytj_jpeg.rf.48037d0b12f8239f35dee03e768535b7.jpg  
 extracting: train/ck0l9fr16nhu20a46eqv4fvr8_jpeg.rf.4a158ede8b3393cf947ea84c0762e3de.jpg  
 extracting: train/ck0kkv2jf8pil09448qnz1ndu_jpeg.rf.4c99bce4e79049d410ed99685b5b9481.jpg  
 extracting: train/ck0kdobka8hev07019b3gbgq6_jpeg.rf.501889aa5f337f4d29218a0159ec3024.jpg  
 extracting: train/ck0nfrec983h90701yvzbsczw_jpeg.rf.4f2f311801d2246d2ac2a329720eeac2.jpg  
 extracting: train/ck0u13mbhte0v0838nxz2zdkf_jpeg.rf.4e304678c3a4d073e70f5fed7b81fb6a.jpg  
 extracting: train/ck0ukk30lwlt9070113ad7m2j_jpeg.rf.4d77773c96711334024a7079a0a0d64b.jpg  
 extracting: train/ck0qc8qkohujb0701aq2988oa_jpeg.rf.4ff832412fae1226c6ce09763a4ffd69.jpg  
 extracting: train/ck0l8di4xoje408487bsrbpj9_jpeg.rf.4fd540845a7219aa411aa2533f2b7360.jpg  
 extracting: train/ck0kcnqqgk6li0848vp3bn5sx_jpeg.rf.4bc833ef89a2c08830c198787e098439.jpg  
 extracting: train/ck0tsx6f1rqmc08631bs71ker_jpeg.rf.51dd904477f95dd3f7eec182b57fac7f.jpg  
 extracting: train/ck0tsymgpsbmm0838ci72jvcf_jpeg.rf.46c0521fe9d69b8ed357771ef0fbc57b.jpg  
 extracting: train/ck0kd8vmh68nl083835g4ngoc_jpeg.rf.5358b8fd4520ff36e7d3cee68d74189d.jpg  
 extracting: train/ck0kcv9km4vc70794bkufs7xw_jpeg.rf.5147acee07ae33e254afa3b20cda8ff7.jpg  
 extracting: train/ck0qdhfh1uks20848pclbrty6_jpeg.rf.501c7370c661e2191d6ea8f98d952429.jpg  
 extracting: train/ck0nem99k9e9v0721yu2dtiqq_jpeg.rf.5426caec363c022f59af008ded90fe5b.jpg  
 extracting: train/ck0u0znmjuuay0701hyc6trun_jpeg.rf.53a913bf25f89c0e035d59046c781999.jpg  
 extracting: train/ck0t6nixgmcop0863zbzsc32e_jpeg.rf.5128a3eafb21c75c41bb7063ed3203c2.jpg  
 extracting: train/ck0kfljfp7vuk09441308dug1_jpeg.rf.566c11d29c6235196b9be66b632a7144.jpg  
 extracting: train/ck0kcoc8ik6ni0848clxs0vif_jpeg.rf.5640439604cae46fce614a40c3b86851.jpg  
 extracting: train/ck0twempn5ur80a462q4bosuo_jpeg.rf.506562e8f26d43e9801895289f264643.jpg  
 extracting: train/ck0t75ti6on9t07015p6wbjle_jpeg.rf.573daa61eb90b7708516c6e905e97ded.jpg  
 extracting: train/ck0lwfg8yjmbz07219irceyfl_jpeg.rf.545f83338c236e79a020ea5c0e9ac75b.jpg  
 extracting: train/ck0qb9do2fwc708385reizk6h_jpeg.rf.5ad879c508169b8849985fb0322b82a7.jpg  
 extracting: train/ck0kn6xvv6ds10794waw8k1tx_jpeg.rf.5795c1a6eb88cc252cb155df96a8f4f1.jpg  
 extracting: train/ck0rpaam92wcd0794b9wy12ks_jpeg.rf.56a904df4388c6d27519fb4425a6d039.jpg  
 extracting: train/ck0t51xn3ns6t0701kue9baog_jpeg.rf.5947dc01b2938fec4c4c93cdb21bbe68.jpg  
 extracting: train/ck0kot81t6k5g079474qsyev2_jpeg.rf.57920ecf15b32fe925f800a4889bc9b2.jpg  
 extracting: train/ck0qdf4hwkrzf0721pwn991sv_jpeg.rf.590572916272b36d9b159dbf787fd540.jpg  
 extracting: train/ck0qbedgvf3qu08637yvo02cq_jpeg.rf.59de5b6394a35819e8447d7a0b35aa4d.jpg  
 extracting: train/ck0u97a47vo2h0944rwxehqpl_jpeg.rf.5c1afcf6a44e6c78d176d4dbf18c746d.jpg  
 extracting: train/ck0l97837c0ta0944wnryuimb_jpeg.rf.5ca4909d29a3477e71b63f2fb07f35bd.jpg  
 extracting: train/ck0t5t7ggo1650944q3cyegrd_jpeg.rf.55672ea4681ec336bd177182aa9fc01d.jpg  
 extracting: train/ck0kkgwvk9hhg0701edmbvr7r_jpeg.rf.5dfbd639e5935bf106c60e88c60df6ee.jpg  
 extracting: train/ck0kmi067lmu70848mrnfi0rc_jpeg.rf.596088d4136dd9e1cd01232ed84858e5.jpg  
 extracting: train/ck0m14sd8vbk90a461shu6qbb_jpeg.rf.5701416bf0dda8e3d218a20d64b15d25.jpg  
 extracting: train/ck0rr7nhf6w970944cb2nnlih_jpeg.rf.5e05873b54bdb3d0740aac81018f0be0.jpg  
 extracting: train/ck0nfsgb3a5cl0721k678jfzb_jpeg.rf.5d08fdb7ae78b76efa0b9f9573312279.jpg  
 extracting: train/ck0ujo3wrv2130838pn66pvj4_jpeg.rf.5bc06214bf13e32fe43c7fc4aa57c5ff.jpg  
 extracting: train/ck0t5n2jlkxuc07940mygjo8f_jpeg.rf.53251e2e623aba60fe567417508520e6.jpg  
 extracting: train/ck0tzza3ut6ra0838aef1ubfa_jpeg.rf.5f516e21bf398766f578dba58be38004.jpg  
 extracting: train/ck0txwcn1scmo0863p2fmcq09_jpeg.rf.5de859cc484d0a19d5c8dd7dd25d6719.jpg  
 extracting: train/ck0nculry62mq0944n824zvzk_jpeg.rf.5fc878ac19683a7b8f56241a55e028d1.jpg  
 extracting: train/ck0t40rhdz68s0a46ekx049a6_jpeg.rf.557a4abf25420ed7de26776204356a55.jpg  
 extracting: train/ck0km64lvbavu0721bhfm918u_jpeg.rf.6052cc2835f6b229e93e3831c3548aa9.jpg  
 extracting: train/ck0uk4wf8875i0a46hztpqvip_jpeg.rf.5f8fc6c91e4d9bf46719e468fe6694b8.jpg  
 extracting: train/ck0k99r7bir3f0a460bctrlmy_jpeg.rf.60c8a3e6df7a89892f37628735d30b56.jpg  
 extracting: train/ck0km26n78w6b0944d8vapoyy_jpeg.rf.607d75d5bb3b7837239d7f96202e6ea4.jpg  
 extracting: train/ck0oul6eyrpa408637g72xnc4_jpeg.rf.61e5ea554d5e3d4d987724c750331228.jpg  
 extracting: train/ck0txvtlisck20863fbb7u1gc_jpeg.rf.60e4854c7d440b80a7050e80f5c8c386.jpg  
 extracting: train/ck0knw1mc96cu09446iq0ugae_jpeg.rf.5b580bf1ea842b404c28668d2440e029.jpg  
 extracting: train/ck0rqkhjb4o7k0863lhsglv38_jpeg.rf.625148aa0bd8be10b03b5075da92642c.jpg  
 extracting: train/ck0tzkkd8ulz70701xe5upkxe_jpeg.rf.5764521112251a0e2ba064cb5027927a.jpg  
 extracting: train/ck0kmewv08ygm0944br9j606r_jpeg.rf.6278c352631da1f58205844ea61e448e.jpg  
 extracting: train/ck0kd21cfj2810a46eicmjtng_jpeg.rf.63240d8bfefc76c2f7f0370589be87cf.jpg  
 extracting: train/ck0kmi7k47m0i08380dwkjuc1_jpeg.rf.682a79c00e7e5f9fa8a51010e98453f0.jpg  
 extracting: train/ck0knae84lryw08485s94co4s_jpeg.rf.69158b4e481df043d0b41096418068ae.jpg  
 extracting: train/ck0lwg0retsfg0848hqi60tlt_jpeg.rf.6a4050512fcce18bf71a78e5f67480da.jpg  
 extracting: train/ck0ouldkdrpic086327fu1wjk_jpeg.rf.668822f81e97a787458043d3030ca983.jpg  
 extracting: train/ck0ukhrm7wh9z0944dld49omk_jpeg.rf.694405b9bddc6c72dd59c604b56882c5.jpg  
 extracting: train/ck0kl2hwy5xu80863zi6j6gy4_jpeg.rf.68240262316a48b7cfeafaee3ecd0183.jpg  
 extracting: train/ck0tyt8nmui9l0701fr3lsu7s_jpeg.rf.700c44ca035c17a36db5b3f055ffdd43.jpg  
 extracting: train/ck0rr0ik14xk30863c291xijp_jpeg.rf.6b0ee01d45f7a3813123e5208d17953a.jpg  
 extracting: train/ck0ovvrza6e0e0a46doaio091_jpeg.rf.6be0ea36c14cb5b1ef37ed6b6baf2e19.jpg  
 extracting: train/ck0t1zwemkseu08636i7fpksi_jpeg.rf.6ea1342d39b68594ce956bfffe45de1a.jpg  
 extracting: train/ck0kmem9m68vr07947lx0jam1_jpeg.rf.6bad01df4b24ef4b0984bd76679a0ae3.jpg  
 extracting: train/ck0t7hsj2on2s094444drdqs0_jpeg.rf.6c2793199df6ddcbba7859de1e789ca7.jpg  
 extracting: train/ck0u0e1zzwydk07210xx5o9vv_jpeg.rf.6c7fee02ba632a91476e2898772dfb97.jpg  
 extracting: train/ck0u07w91ulq009446ol1bta4_jpeg.rf.6a95622002ecf38c4eb63ad18a9c0b21.jpg  
 extracting: train/ck0m15edijrbs0944olo8aqra_jpeg.rf.6b4f54fcf5c1108ef7535aee8d047b6f.jpg  
 extracting: train/ck0ukkz8tytsn0721y70sud46_jpeg.rf.6fe63258f755ade455467adb529c4a78.jpg  
 extracting: train/ck0khubxx5khq0794ja58vgv5_jpeg.rf.7062b06c161fce813a44a33f4699cc07.jpg  
 extracting: train/ck0t22nfcmwoj0701cj69tol6_jpeg.rf.71a34ebf9060a80d99366871940a006e.jpg  
 extracting: train/ck0kmtk7c90w60944bjhqhm4j_jpeg.rf.71beb73b2221fc9e2831229162155865.jpg  
 extracting: train/ck0m1lu0nhb320863zyoofeiw_jpeg.rf.714d817070326a2c6005c17c4df7f5d6.jpg  
 extracting: train/ck0kkgh0ik3y20a46kzrit8v2_jpeg.rf.7250f86212631f0291f6bb00ce888e3b.jpg  
 extracting: train/ck0rp9iochje10a46pq920puz_jpeg.rf.729f87c246f93d3f2136e331550e45bd.jpg  
 extracting: train/ck0txp6s8ue5b0701zx9qjwwe_jpeg.rf.7323b5f5fb810c2323c190f7014be39b.jpg  
 extracting: train/ck0tyk5ussex90863mbb4h9of_jpeg.rf.75829f74265397efd9d64d09eb245728.jpg  
 extracting: train/ck0km5x3u9r910701d3yfjhtv_jpeg.rf.74e3e83a2502abe156eaecc79c167924.jpg  
 extracting: train/ck0tx2mh0r4wk079479dvqbfl_jpeg.rf.732991e4d719bb8b294e20a918182083.jpg  
 extracting: train/ck0rpixgt5x8l0944oojfpa2o_jpeg.rf.6c7572b354b3f97266a8bb0ce4ce01c3.jpg  
 extracting: train/ck0ndy9p207ai0738n7khflbr_jpeg.rf.766234243899b3428377fc07c6369429.jpg  
 extracting: train/ck0kmr5ewkhpq0a46nuv73id4_jpeg.rf.74f2b644d55c48f025ffeead3ae7b5d1.jpg  
 extracting: train/ck0nfo0y4a2a00721ek5g056r_jpeg.rf.71033dd2bf0be37a6d27e27bf0148bc9.jpg  
 extracting: train/ck0txo8iyue140701jnymorwf_jpeg.rf.73d3eef1f5fba69eb4ee7ec7fdca29c6.jpg  
 extracting: train/ck0lygt2vkh030721ufglqzq9_jpeg.rf.764b2540f8689e96368f458f860ac7e0.jpg  
 extracting: train/ck0tx2twm5xya0a46rieclafo_jpeg.rf.72f6fd749d0632e667cdaba3de923580.jpg  
 extracting: train/ck0u0y7byrnvd0794vpqdexro_jpeg.rf.76e4eda8229fefea21082bd5fd7cd7b9.jpg  
 extracting: train/ck0kdc75e4wgn0794wltgjwdw_jpeg.rf.7885ecf52ed48339714e0b1e5c5a79fa.jpg  
 extracting: train/ck0kkzzufk6zy0a464la26n1c_jpeg.rf.76f46cf256cbd51965c691f7cbcbe4eb.jpg  
 extracting: train/ck0t1umd5jowc0794my89kbsj_jpeg.rf.764ab3f02c226e1903d522a50043fd51.jpg  
 extracting: train/ck0kcaec57i7s0944jkpjyecx_jpeg.rf.77ab92b4838acebe3c46d1442f63d57c.jpg  
 extracting: train/ck0qd50hwi1k10944i25e7al9_jpeg.rf.7d08dd5bcc7de226b6c827595f14bd5c.jpg  
 extracting: train/ck0lx50eojwbq0721qtix2xqg_jpeg.rf.7a04599abfa9ee74b24fa3d1ab6bc4aa.jpg  
 extracting: train/ck0tyojsawouz0721wteyaxe4_jpeg.rf.7cf2a750405d80504ded0b5bd4c9e5a0.jpg  
 extracting: train/ck0t7cnx9n84208386ufrbnj2_jpeg.rf.7cd70c16665195fdc856ecc450c97526.jpg  
 extracting: train/ck0twvs9zu5t00944ltnvijnz_jpeg.rf.7d8f74f476408afdae7e2709cf3c0481.jpg  
 extracting: train/ck0tx9epawjk50721w6282v2c_jpeg.rf.80f3230b481013e63bf8b8419d5c4d40.jpg  
 extracting: train/ck0lxcldkid0h0701q2hvnu98_jpeg.rf.80af3ad59294a5da195b9b9e1359c218.jpg  
 extracting: train/ck0rrpwlvj04d0a46r4th0cjf_jpeg.rf.7680e5d4b9c4fe2369007c34b035314d.jpg  
 extracting: train/ck0ug2zaqt3zf0794joucwcgf_jpeg.rf.7ed1acf9e2d941155609d08873a91c12.jpg  
 extracting: train/ck0tssirhrq1u0863eoicr5bp_jpeg.rf.7a415d1f7c473ce87d957c7d786b4d13.jpg  
 extracting: train/ck0oulkapr2bj0794zz57fbjp_jpeg.rf.815eddbac8c2ffda65a85c956d1105f3.jpg  
 extracting: train/ck0tzphn3t5by0838ge18vkq7_jpeg.rf.801414cb6a1f87b900c75f4c139a82f3.jpg  
 extracting: train/ck0twvkofr3wr0794xzv7br97_jpeg.rf.806e8522d2653527fe9880f2c6823824.jpg  
 extracting: train/ck0qd918zic840701hnezgpgy_jpeg.rf.792baa9515a8053871c3a871b430cf66.jpg  
 extracting: train/ck0km415y8wk90944mab9i4lr_jpeg.rf.8259dab536267612deefe186ce316b72.jpg  
 extracting: train/ck0tx3rohwiml0721wn578q0e_jpeg.rf.80bdd0c51231554676c19f1b650953df.jpg  
  ...
 extracting: valid/ck0t74gx40b5u0a46s72no2dg_jpeg.rf.ffac374210a3586ba32f577104c18199.jpg  
 extracting: valid/ck0nfxm5e5gq50863tqwxrgft_jpeg.rf.fd2b84f48cd11ef2c3918eb0ca14393a.jpg  
 extracting: valid/ck0nftcfgka7e0848a9ie2b3b_jpeg.rf.fd3c2b42164e7e79bda1b4560b3298d1.jpg  
 extracting: valid/ck0twi6d05v8f0a469q54h3iq_jpeg.rf.fa918cf140bf7ac8183a8bbfb2b38454.jpg  
 extracting: valid/ck0t7czov10bf08489elv37go_jpeg.rf.ce6d0ff31f0071546a96ea9d1542264a.jpg  
 extracting: valid/ck0kdgpnj8gvt0701oaod540q_jpeg.rf.f49fbeba5eb3b767fdedfab317839752.jpg  
 extracting: valid/ck0rqzwam6rx20944o5eby5wd_jpeg.rf.e7cf8194969d8907e257abf0aa4a1a81.jpg  
 extracting: test/_annotations.coco.json  
 extracting: train/_annotations.coco.json  
 extracting: valid/_annotations.coco.json  
 extracting: README.roboflow.txt     
In [ ]:
import matplotlib
import matplotlib.pyplot as plt

import glob
import io
import os
import random

import numpy as np
from six import BytesIO
from PIL import Image, ImageDraw, ImageFont

import tensorflow as tf

from object_detection.utils import label_map_util
from object_detection.utils import config_util
from object_detection.utils import visualization_utils as viz_utils
from object_detection.builders import model_builder

%matplotlib inline
In [ ]:
def load_image_into_numpy_array(path):
  """Load an image from file into a numpy array.

  Puts image into numpy array to feed into tensorflow graph.
  Note that by convention we put it into a numpy array with shape
  (height, width, channels), where channels=3 for RGB.

  Args:
    path: the file path to the image

  Returns:
    uint8 numpy array with shape (img_height, img_width, 3)
  """
  img_data = tf.io.gfile.GFile(path, 'rb').read()
  image = Image.open(BytesIO(img_data)).convert('RGB')
  return np.array(image, dtype=np.uint8)
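As a quick sanity check of the loading convention (height first, then width), here is a minimal local sketch of the same helper. It uses plain `open()` instead of `tf.io.gfile` so it runs without TensorFlow; the function name `load_image_into_numpy_array_local` is ours, for illustration only.

```python
import os
import tempfile

import numpy as np
from io import BytesIO
from PIL import Image


def load_image_into_numpy_array_local(path):
    # Same convention as the notebook helper: uint8 array of shape
    # (height, width, 3), but read with plain file I/O.
    with open(path, 'rb') as f:
        image = Image.open(BytesIO(f.read())).convert('RGB')
    return np.array(image, dtype=np.uint8)


# Check with a synthetic 640x480 image: PIL sizes are (width, height),
# while the numpy array comes out (height, width, channels).
img = Image.new('RGB', (640, 480), color=(200, 100, 50))
with tempfile.NamedTemporaryFile(suffix='.jpg', delete=False) as tmp:
    img.save(tmp.name)
    arr = load_image_into_numpy_array_local(tmp.name)
os.unlink(tmp.name)
print(arr.shape)  # (480, 640, 3)
```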
In [ ]:
%ls './training/'
checkpoint                   ckpt-6.index
ckpt-10.data-00000-of-00002  ckpt-7.data-00000-of-00002
ckpt-10.data-00001-of-00002  ckpt-7.data-00001-of-00002
ckpt-10.index                ckpt-7.index
ckpt-11.data-00000-of-00002  ckpt-8.data-00000-of-00002
ckpt-11.data-00001-of-00002  ckpt-8.data-00001-of-00002
ckpt-11.index                ckpt-8.index
ckpt-5.data-00000-of-00002   ckpt-9.data-00000-of-00002
ckpt-5.data-00001-of-00002   ckpt-9.data-00001-of-00002
ckpt-5.index                 ckpt-9.index
ckpt-6.data-00000-of-00002   train/
ckpt-6.data-00001-of-00002
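The `ckpt-N.index` / `ckpt-N.data-*` files above follow TF2's checkpoint naming. In practice `tf.train.latest_checkpoint('./training')` returns the newest prefix (by reading the `checkpoint` state file); as a pure-Python sketch of the same idea, a hypothetical helper can pick the highest `N` from the filenames directly:

```python
import re


def latest_checkpoint_prefix(filenames):
    """Return the ckpt-N prefix with the highest N, or None."""
    best = None
    for name in filenames:
        # Match only the .index file so each checkpoint counts once.
        m = re.match(r'(ckpt-(\d+))\.index$', name)
        if m and (best is None or int(m.group(2)) > best[1]):
            best = (m.group(1), int(m.group(2)))
    return best[0] if best else None


files = ['checkpoint', 'ckpt-5.index', 'ckpt-9.index',
         'ckpt-10.index', 'ckpt-11.index',
         'ckpt-11.data-00000-of-00002']
print(latest_checkpoint_prefix(files))  # ckpt-11
```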
In [ ]:
#recover our saved model
pipeline_config = pipeline_file
#generally you want to put the last ckpt from training in here
checkpoint_path = './training/ckpt-9'
configs = config_util.get_configs_from_pipeline_file(pipeline_config)
model_config = configs['model']
detection_model = model_builder.build(
      model_config=model_config, is_training=False)

# Restore checkpoint (expect_partial silences warnings about
# training-only variables that inference does not need)
ckpt = tf.compat.v2.train.Checkpoint(model=detection_model)
ckpt.restore(checkpoint_path).expect_partial()


def get_model_detection_function(model):
  """Get a tf.function for detection."""

  @tf.function
  def detect_fn(image):
    """Detect objects in image."""

    image, shapes = model.preprocess(image)
    prediction_dict = model.predict(image, shapes)
    detections = model.postprocess(prediction_dict, shapes)

    return detections, prediction_dict, tf.reshape(shapes, [-1])

  return detect_fn

detect_fn = get_model_detection_function(detection_model)
In [ ]:
#map labels for inference decoding
label_map_path = configs['eval_input_config'].label_map_path
label_map = label_map_util.load_labelmap(label_map_path)
categories = label_map_util.convert_label_map_to_categories(
    label_map,
    max_num_classes=label_map_util.get_max_label_map_index(label_map),
    use_display_name=True)
category_index = label_map_util.create_category_index(categories)
label_map_dict = label_map_util.get_label_map_dict(label_map, use_display_name=True)
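For this single-class smoke dataset, `category_index` ends up as a small mapping from class id to metadata. The structure below is illustrative only (the real dict is built by `label_map_util.create_category_index` from the label map file); it shows why the inference cells add `label_id_offset` before looking up names, since the model emits zero-based class indices:

```python
# Illustrative structure -- values assumed for a one-class label map.
category_index_example = {1: {'id': 1, 'name': 'smoke'}}

# The model outputs zero-based classes, so inference code shifts by 1
# before indexing into the category map.
label_id_offset = 1
raw_class = 0  # as found in detections['detection_classes']
print(category_index_example[raw_class + label_id_offset]['name'])  # smoke
```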

The rest of the notebook runs inference on new test images of smoke. Besides testing on unmodified images, we also try modifications such as flipping the image horizontally and converting it to grayscale before inference. Feel free to try your own images! Before executing the cells below, make sure the paths point to the test images from my GitHub repository.

True Positives

In [ ]:
#run detector on test image
#it takes a little longer on the first run and then runs at normal speed. 
import random

TEST_IMAGE_PATHS = glob.glob('./test/test/*.jpg')
image_path = random.choice(TEST_IMAGE_PATHS)
image_np = load_image_into_numpy_array(image_path)

# Things to try:
# Flip horizontally
# image_np = np.fliplr(image_np).copy()

# Convert image to grayscale
# image_np = np.tile(
#     np.mean(image_np, 2, keepdims=True), (1, 1, 3)).astype(np.uint8)

input_tensor = tf.convert_to_tensor(
    np.expand_dims(image_np, 0), dtype=tf.float32)
detections, predictions_dict, shapes = detect_fn(input_tensor)

label_id_offset = 1
image_np_with_detections = image_np.copy()

viz_utils.visualize_boxes_and_labels_on_image_array(
      image_np_with_detections,
      detections['detection_boxes'][0].numpy(),
      (detections['detection_classes'][0].numpy() + label_id_offset).astype(int),
      detections['detection_scores'][0].numpy(),
      category_index,
      use_normalized_coordinates=True,
      max_boxes_to_draw=200,
      min_score_thresh=.5,
      agnostic_mode=False,
)

plt.figure(figsize=(12,16))
plt.imshow(image_np_with_detections)
plt.show()
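The commented-out transforms in the cell above are plain numpy operations. A quick demonstration that both preserve the `(height, width, 3)` shape the detector expects, using a tiny synthetic array:

```python
import numpy as np

# Tiny 2x3 RGB "image" with distinct values per pixel
image_np = np.arange(2 * 3 * 3, dtype=np.uint8).reshape(2, 3, 3)

# Horizontal flip: reverses the width axis; shape is unchanged
flipped = np.fliplr(image_np).copy()

# Grayscale: average the channels, then tile the mean back into
# 3 identical channels so the RGB input shape is preserved
gray = np.tile(
    np.mean(image_np, 2, keepdims=True), (1, 1, 3)).astype(np.uint8)

print(flipped.shape, gray.shape)  # (2, 3, 3) (2, 3, 3)
```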
In [ ]:
#run detector on test image
#it takes a little longer on the first run and then runs at normal speed. 
import random

TEST_IMAGE_PATHS = glob.glob('./test/test/*.jpg')
image_path = random.choice(TEST_IMAGE_PATHS)
image_np = load_image_into_numpy_array(image_path)

# Things to try:
# Flip horizontally
# image_np = np.fliplr(image_np).copy()

# Convert image to grayscale
# image_np = np.tile(
#     np.mean(image_np, 2, keepdims=True), (1, 1, 3)).astype(np.uint8)

input_tensor = tf.convert_to_tensor(
    np.expand_dims(image_np, 0), dtype=tf.float32)
detections, predictions_dict, shapes = detect_fn(input_tensor)

label_id_offset = 1
image_np_with_detections = image_np.copy()

viz_utils.visualize_boxes_and_labels_on_image_array(
      image_np_with_detections,
      detections['detection_boxes'][0].numpy(),
      (detections['detection_classes'][0].numpy() + label_id_offset).astype(int),
      detections['detection_scores'][0].numpy(),
      category_index,
      use_normalized_coordinates=True,
      max_boxes_to_draw=200,
      min_score_thresh=.5,
      agnostic_mode=False,
)

plt.figure(figsize=(12,16))
plt.imshow(image_np_with_detections)
plt.show()
In [ ]:
#run detector on test image
#it takes a little longer on the first run and then runs at normal speed. 
import random

TEST_IMAGE_PATHS = glob.glob('./test/test/*.jpg')
image_path = random.choice(TEST_IMAGE_PATHS)
image_np = load_image_into_numpy_array(image_path)

# Things to try:
# Flip horizontally
# image_np = np.fliplr(image_np).copy()

# Convert image to grayscale
# image_np = np.tile(
#     np.mean(image_np, 2, keepdims=True), (1, 1, 3)).astype(np.uint8)

input_tensor = tf.convert_to_tensor(
    np.expand_dims(image_np, 0), dtype=tf.float32)
detections, predictions_dict, shapes = detect_fn(input_tensor)

label_id_offset = 1
image_np_with_detections = image_np.copy()

viz_utils.visualize_boxes_and_labels_on_image_array(
      image_np_with_detections,
      detections['detection_boxes'][0].numpy(),
      (detections['detection_classes'][0].numpy() + label_id_offset).astype(int),
      detections['detection_scores'][0].numpy(),
      category_index,
      use_normalized_coordinates=True,
      max_boxes_to_draw=200,
      min_score_thresh=.5,
      agnostic_mode=False,
)

plt.figure(figsize=(12,16))
plt.imshow(image_np_with_detections)
plt.show()
In [ ]:
#run detector on test image
#it takes a little longer on the first run and then runs at normal speed. 
import random

TEST_IMAGE_PATHS = glob.glob('./test/test/*.jpg')
image_path = random.choice(TEST_IMAGE_PATHS)
image_np = load_image_into_numpy_array(image_path)

# Things to try:
# Flip horizontally
# image_np = np.fliplr(image_np).copy()

# Convert image to grayscale
# image_np = np.tile(
#     np.mean(image_np, 2, keepdims=True), (1, 1, 3)).astype(np.uint8)

input_tensor = tf.convert_to_tensor(
    np.expand_dims(image_np, 0), dtype=tf.float32)
detections, predictions_dict, shapes = detect_fn(input_tensor)

label_id_offset = 1
image_np_with_detections = image_np.copy()

viz_utils.visualize_boxes_and_labels_on_image_array(
      image_np_with_detections,
      detections['detection_boxes'][0].numpy(),
      (detections['detection_classes'][0].numpy() + label_id_offset).astype(int),
      detections['detection_scores'][0].numpy(),
      category_index,
      use_normalized_coordinates=True,
      max_boxes_to_draw=200,
      min_score_thresh=.5,
      agnostic_mode=False,
)

plt.figure(figsize=(12,16))
plt.imshow(image_np_with_detections)
plt.show()
In [ ]:
#run detector on test image
#it takes a little longer on the first run and then runs at normal speed. 
import random

TEST_IMAGE_PATHS = glob.glob('./test/test/*.jpg')
image_path = random.choice(TEST_IMAGE_PATHS)
image_np = load_image_into_numpy_array(image_path)

# Things to try:
# Flip horizontally
# image_np = np.fliplr(image_np).copy()

# Convert image to grayscale
# image_np = np.tile(
#     np.mean(image_np, 2, keepdims=True), (1, 1, 3)).astype(np.uint8)

input_tensor = tf.convert_to_tensor(
    np.expand_dims(image_np, 0), dtype=tf.float32)
detections, predictions_dict, shapes = detect_fn(input_tensor)

label_id_offset = 1
image_np_with_detections = image_np.copy()

viz_utils.visualize_boxes_and_labels_on_image_array(
      image_np_with_detections,
      detections['detection_boxes'][0].numpy(),
      (detections['detection_classes'][0].numpy() + label_id_offset).astype(int),
      detections['detection_scores'][0].numpy(),
      category_index,
      use_normalized_coordinates=True,
      max_boxes_to_draw=200,
      min_score_thresh=.5,
      agnostic_mode=False,
)

plt.figure(figsize=(12,16))
plt.imshow(image_np_with_detections)
plt.show()
In [ ]:
#run detector on test image
#it takes a little longer on the first run and then runs at normal speed. 
import random

TEST_IMAGE_PATHS = glob.glob('./test/test/*.jpg')
image_path = random.choice(TEST_IMAGE_PATHS)
image_np = load_image_into_numpy_array(image_path)

# Things to try:
# Flip horizontally
# image_np = np.fliplr(image_np).copy()

# Convert image to grayscale
# image_np = np.tile(
#     np.mean(image_np, 2, keepdims=True), (1, 1, 3)).astype(np.uint8)

input_tensor = tf.convert_to_tensor(
    np.expand_dims(image_np, 0), dtype=tf.float32)
detections, predictions_dict, shapes = detect_fn(input_tensor)

label_id_offset = 1
image_np_with_detections = image_np.copy()

viz_utils.visualize_boxes_and_labels_on_image_array(
      image_np_with_detections,
      detections['detection_boxes'][0].numpy(),
      (detections['detection_classes'][0].numpy() + label_id_offset).astype(int),
      detections['detection_scores'][0].numpy(),
      category_index,
      use_normalized_coordinates=True,
      max_boxes_to_draw=200,
      min_score_thresh=.5,
      agnostic_mode=False,
)

plt.figure(figsize=(12,16))
plt.imshow(image_np_with_detections)
plt.show()

True Negatives

In [ ]:
#run detector on test image
#it takes a little longer on the first run and then runs at normal speed. 
import random

TEST_IMAGE_PATHS = glob.glob('./test/test/*.jpg')
image_path = random.choice(TEST_IMAGE_PATHS)
image_np = load_image_into_numpy_array(image_path)

# Things to try:
# Flip horizontally
# image_np = np.fliplr(image_np).copy()

# Convert image to grayscale
# image_np = np.tile(
#     np.mean(image_np, 2, keepdims=True), (1, 1, 3)).astype(np.uint8)

input_tensor = tf.convert_to_tensor(
    np.expand_dims(image_np, 0), dtype=tf.float32)
detections, predictions_dict, shapes = detect_fn(input_tensor)

label_id_offset = 1
image_np_with_detections = image_np.copy()

viz_utils.visualize_boxes_and_labels_on_image_array(
      image_np_with_detections,
      detections['detection_boxes'][0].numpy(),
      (detections['detection_classes'][0].numpy() + label_id_offset).astype(int),
      detections['detection_scores'][0].numpy(),
      category_index,
      use_normalized_coordinates=True,
      max_boxes_to_draw=200,
      min_score_thresh=.5,
      agnostic_mode=False,
)

plt.figure(figsize=(12,16))
plt.imshow(image_np_with_detections)
plt.show()
In [ ]:
#run detector on test image
#it takes a little longer on the first run and then runs at normal speed. 
import random

TEST_IMAGE_PATHS = glob.glob('./test/test/*.jpg')
image_path = random.choice(TEST_IMAGE_PATHS)
image_np = load_image_into_numpy_array(image_path)

# Things to try:
# Flip horizontally
# image_np = np.fliplr(image_np).copy()

# Convert image to grayscale
# image_np = np.tile(
#     np.mean(image_np, 2, keepdims=True), (1, 1, 3)).astype(np.uint8)

input_tensor = tf.convert_to_tensor(
    np.expand_dims(image_np, 0), dtype=tf.float32)
detections, predictions_dict, shapes = detect_fn(input_tensor)

label_id_offset = 1
image_np_with_detections = image_np.copy()

viz_utils.visualize_boxes_and_labels_on_image_array(
      image_np_with_detections,
      detections['detection_boxes'][0].numpy(),
      (detections['detection_classes'][0].numpy() + label_id_offset).astype(int),
      detections['detection_scores'][0].numpy(),
      category_index,
      use_normalized_coordinates=True,
      max_boxes_to_draw=200,
      min_score_thresh=.5,
      agnostic_mode=False,
)

plt.figure(figsize=(12,16))
plt.imshow(image_np_with_detections)
plt.show()

False Negatives

In [ ]:
#run detector on test image
#it takes a little longer on the first run and then runs at normal speed. 
import random

TEST_IMAGE_PATHS = glob.glob('./test/test/*.jpg')
image_path = random.choice(TEST_IMAGE_PATHS)
image_np = load_image_into_numpy_array(image_path)

# Things to try:
# Flip horizontally
# image_np = np.fliplr(image_np).copy()

# Convert image to grayscale
# image_np = np.tile(
#     np.mean(image_np, 2, keepdims=True), (1, 1, 3)).astype(np.uint8)

input_tensor = tf.convert_to_tensor(
    np.expand_dims(image_np, 0), dtype=tf.float32)
detections, predictions_dict, shapes = detect_fn(input_tensor)

label_id_offset = 1
image_np_with_detections = image_np.copy()

viz_utils.visualize_boxes_and_labels_on_image_array(
      image_np_with_detections,
      detections['detection_boxes'][0].numpy(),
      (detections['detection_classes'][0].numpy() + label_id_offset).astype(int),
      detections['detection_scores'][0].numpy(),
      category_index,
      use_normalized_coordinates=True,
      max_boxes_to_draw=200,
      min_score_thresh=.5,
      agnostic_mode=False,
)

plt.figure(figsize=(12,16))
plt.imshow(image_np_with_detections)
plt.show()
In [ ]:
#run detector on test image
#it takes a little longer on the first run and then runs at normal speed. 
import random

TEST_IMAGE_PATHS = glob.glob('./test/test/*.jpg')
image_path = random.choice(TEST_IMAGE_PATHS)
image_np = load_image_into_numpy_array(image_path)

# Things to try:
# Flip horizontally
# image_np = np.fliplr(image_np).copy()

# Convert image to grayscale
# image_np = np.tile(
#     np.mean(image_np, 2, keepdims=True), (1, 1, 3)).astype(np.uint8)

input_tensor = tf.convert_to_tensor(
    np.expand_dims(image_np, 0), dtype=tf.float32)
detections, predictions_dict, shapes = detect_fn(input_tensor)

label_id_offset = 1
image_np_with_detections = image_np.copy()

viz_utils.visualize_boxes_and_labels_on_image_array(
      image_np_with_detections,
      detections['detection_boxes'][0].numpy(),
      (detections['detection_classes'][0].numpy() + label_id_offset).astype(int),
      detections['detection_scores'][0].numpy(),
      category_index,
      use_normalized_coordinates=True,
      max_boxes_to_draw=200,
      min_score_thresh=.5,
      agnostic_mode=False,
)

plt.figure(figsize=(12,16))
plt.imshow(image_np_with_detections)
plt.show()
In [ ]:
#run detector on test image
#it takes a little longer on the first run and then runs at normal speed. 
import random

TEST_IMAGE_PATHS = glob.glob('./test/test/*.jpg')
image_path = random.choice(TEST_IMAGE_PATHS)
image_np = load_image_into_numpy_array(image_path)

# Things to try:
# Flip horizontally
# image_np = np.fliplr(image_np).copy()

# Convert image to grayscale
# image_np = np.tile(
#     np.mean(image_np, 2, keepdims=True), (1, 1, 3)).astype(np.uint8)

input_tensor = tf.convert_to_tensor(
    np.expand_dims(image_np, 0), dtype=tf.float32)
detections, predictions_dict, shapes = detect_fn(input_tensor)

label_id_offset = 1
image_np_with_detections = image_np.copy()

viz_utils.visualize_boxes_and_labels_on_image_array(
      image_np_with_detections,
      detections['detection_boxes'][0].numpy(),
      (detections['detection_classes'][0].numpy() + label_id_offset).astype(int),
      detections['detection_scores'][0].numpy(),
      category_index,
      use_normalized_coordinates=True,
      max_boxes_to_draw=200,
      min_score_thresh=.5,
      agnostic_mode=False,
)

plt.figure(figsize=(12,16))
plt.imshow(image_np_with_detections)
plt.show()
In [ ]:
#run detector on test image
#it takes a little longer on the first run and then runs at normal speed. 
import random

TEST_IMAGE_PATHS = glob.glob('./test/test/*.jpg')
image_path = random.choice(TEST_IMAGE_PATHS)
image_np = load_image_into_numpy_array(image_path)

# Things to try:
# Flip horizontally
# image_np = np.fliplr(image_np).copy()

# Convert image to grayscale
# image_np = np.tile(
#     np.mean(image_np, 2, keepdims=True), (1, 1, 3)).astype(np.uint8)

input_tensor = tf.convert_to_tensor(
    np.expand_dims(image_np, 0), dtype=tf.float32)
detections, predictions_dict, shapes = detect_fn(input_tensor)

label_id_offset = 1
image_np_with_detections = image_np.copy()

viz_utils.visualize_boxes_and_labels_on_image_array(
      image_np_with_detections,
      detections['detection_boxes'][0].numpy(),
      (detections['detection_classes'][0].numpy() + label_id_offset).astype(int),
      detections['detection_scores'][0].numpy(),
      category_index,
      use_normalized_coordinates=True,
      max_boxes_to_draw=200,
      min_score_thresh=.5,
      agnostic_mode=False,
)

plt.figure(figsize=(12,16))
plt.imshow(image_np_with_detections)
plt.show()
In [ ]:
#run detector on test image
#it takes a little longer on the first run and then runs at normal speed. 
import random

TEST_IMAGE_PATHS = glob.glob('./test/test/*.jpg')
image_path = random.choice(TEST_IMAGE_PATHS)
image_np = load_image_into_numpy_array(image_path)

# Things to try:
# Flip horizontally
# image_np = np.fliplr(image_np).copy()

# Convert image to grayscale
image_np = np.tile(
    np.mean(image_np, 2, keepdims=True), (1, 1, 3)).astype(np.uint8)

input_tensor = tf.convert_to_tensor(
    np.expand_dims(image_np, 0), dtype=tf.float32)
detections, predictions_dict, shapes = detect_fn(input_tensor)

label_id_offset = 1
image_np_with_detections = image_np.copy()

viz_utils.visualize_boxes_and_labels_on_image_array(
      image_np_with_detections,
      detections['detection_boxes'][0].numpy(),
      (detections['detection_classes'][0].numpy() + label_id_offset).astype(int),
      detections['detection_scores'][0].numpy(),
      category_index,
      use_normalized_coordinates=True,
      max_boxes_to_draw=200,
      min_score_thresh=.5,
      agnostic_mode=False,
)

plt.figure(figsize=(12,16))
plt.imshow(image_np_with_detections)
plt.show()
In [ ]:
#run detector on test image
#it takes a little longer on the first run and then runs at normal speed. 
import random

TEST_IMAGE_PATHS = glob.glob('./test/test/*.jpg')
image_path = random.choice(TEST_IMAGE_PATHS)
image_np = load_image_into_numpy_array(image_path)

# Things to try:
# Flip horizontally
# image_np = np.fliplr(image_np).copy()

# Convert image to grayscale
image_np = np.tile(
    np.mean(image_np, 2, keepdims=True), (1, 1, 3)).astype(np.uint8)

input_tensor = tf.convert_to_tensor(
    np.expand_dims(image_np, 0), dtype=tf.float32)
detections, predictions_dict, shapes = detect_fn(input_tensor)

label_id_offset = 1
image_np_with_detections = image_np.copy()

viz_utils.visualize_boxes_and_labels_on_image_array(
      image_np_with_detections,
      detections['detection_boxes'][0].numpy(),
      (detections['detection_classes'][0].numpy() + label_id_offset).astype(int),
      detections['detection_scores'][0].numpy(),
      category_index,
      use_normalized_coordinates=True,
      max_boxes_to_draw=200,
      min_score_thresh=.5,
      agnostic_mode=False,
)

plt.figure(figsize=(12,16))
plt.imshow(image_np_with_detections)
plt.show()
In [ ]:
#run detector on test image
#it takes a little longer on the first run and then runs at normal speed. 
import random

TEST_IMAGE_PATHS = glob.glob('./test/test/*.jpg')
image_path = random.choice(TEST_IMAGE_PATHS)
image_np = load_image_into_numpy_array(image_path)

# Things to try:
# Flip horizontally
image_np = np.fliplr(image_np).copy()

# Convert image to grayscale
image_np = np.tile(
    np.mean(image_np, 2, keepdims=True), (1, 1, 3)).astype(np.uint8)

input_tensor = tf.convert_to_tensor(
    np.expand_dims(image_np, 0), dtype=tf.float32)
detections, predictions_dict, shapes = detect_fn(input_tensor)

label_id_offset = 1
image_np_with_detections = image_np.copy()

viz_utils.visualize_boxes_and_labels_on_image_array(
      image_np_with_detections,
      detections['detection_boxes'][0].numpy(),
      (detections['detection_classes'][0].numpy() + label_id_offset).astype(int),
      detections['detection_scores'][0].numpy(),
      category_index,
      use_normalized_coordinates=True,
      max_boxes_to_draw=200,
      min_score_thresh=.5,
      agnostic_mode=False,
)

plt.figure(figsize=(12,16))
plt.imshow(image_np_with_detections)
plt.show()

False Negatives

In [ ]:
#run detector on test image
#it takes a little longer on the first run and then runs at normal speed. 
import random

TEST_IMAGE_PATHS = glob.glob('./test/test/*.jpg')
image_path = random.choice(TEST_IMAGE_PATHS)
image_np = load_image_into_numpy_array(image_path)

# Things to try:
# Flip horizontally
image_np = np.fliplr(image_np).copy()

# Convert image to grayscale
image_np = np.tile(
    np.mean(image_np, 2, keepdims=True), (1, 1, 3)).astype(np.uint8)

input_tensor = tf.convert_to_tensor(
    np.expand_dims(image_np, 0), dtype=tf.float32)
detections, predictions_dict, shapes = detect_fn(input_tensor)

label_id_offset = 1
image_np_with_detections = image_np.copy()

viz_utils.visualize_boxes_and_labels_on_image_array(
      image_np_with_detections,
      detections['detection_boxes'][0].numpy(),
      (detections['detection_classes'][0].numpy() + label_id_offset).astype(int),
      detections['detection_scores'][0].numpy(),
      category_index,
      use_normalized_coordinates=True,
      max_boxes_to_draw=200,
      min_score_thresh=.5,
      agnostic_mode=False,
)

plt.figure(figsize=(12,16))
plt.imshow(image_np_with_detections)
plt.show()

Tired of scrolling? Go all the way down for a GIF inference!

In [ ]:
#run detector on test image
#it takes a little longer on the first run and then runs at normal speed. 
import random

TEST_IMAGE_PATHS = glob.glob('./test/test/*.jpg')
image_path = random.choice(TEST_IMAGE_PATHS)
image_np = load_image_into_numpy_array(image_path)

# Things to try:
# Flip horizontally
image_np = np.fliplr(image_np).copy()

# Convert image to grayscale
image_np = np.tile(
    np.mean(image_np, 2, keepdims=True), (1, 1, 3)).astype(np.uint8)

input_tensor = tf.convert_to_tensor(
    np.expand_dims(image_np, 0), dtype=tf.float32)
detections, predictions_dict, shapes = detect_fn(input_tensor)

label_id_offset = 1
image_np_with_detections = image_np.copy()

viz_utils.visualize_boxes_and_labels_on_image_array(
      image_np_with_detections,
      detections['detection_boxes'][0].numpy(),
      (detections['detection_classes'][0].numpy() + label_id_offset).astype(int),
      detections['detection_scores'][0].numpy(),
      category_index,
      use_normalized_coordinates=True,
      max_boxes_to_draw=200,
      min_score_thresh=.5,
      agnostic_mode=False,
)

plt.figure(figsize=(12,16))
plt.imshow(image_np_with_detections)
plt.show()
In [ ]:
#run detector on test image
#it takes a little longer on the first run and then runs at normal speed. 
import random

TEST_IMAGE_PATHS = glob.glob('./test/test/*.jpg')
image_path = random.choice(TEST_IMAGE_PATHS)
image_np = load_image_into_numpy_array(image_path)

# Things to try:
# Flip horizontally
image_np = np.fliplr(image_np).copy()

# Convert image to grayscale
image_np = np.tile(
    np.mean(image_np, 2, keepdims=True), (1, 1, 3)).astype(np.uint8)

input_tensor = tf.convert_to_tensor(
    np.expand_dims(image_np, 0), dtype=tf.float32)
detections, predictions_dict, shapes = detect_fn(input_tensor)

label_id_offset = 1
image_np_with_detections = image_np.copy()

viz_utils.visualize_boxes_and_labels_on_image_array(
      image_np_with_detections,
      detections['detection_boxes'][0].numpy(),
      (detections['detection_classes'][0].numpy() + label_id_offset).astype(int),
      detections['detection_scores'][0].numpy(),
      category_index,
      use_normalized_coordinates=True,
      max_boxes_to_draw=200,
      min_score_thresh=.5,
      agnostic_mode=False,
)

plt.figure(figsize=(12,16))
plt.imshow(image_np_with_detections)
plt.show()
In [ ]:
#run detector on test image
#it takes a little longer on the first run and then runs at normal speed. 
import random

TEST_IMAGE_PATHS = glob.glob('./test/test/*.jpg')
image_path = random.choice(TEST_IMAGE_PATHS)
image_np = load_image_into_numpy_array(image_path)

# Things to try:
# Flip horizontally
image_np = np.fliplr(image_np).copy()

# Convert image to grayscale
image_np = np.tile(
    np.mean(image_np, 2, keepdims=True), (1, 1, 3)).astype(np.uint8)

input_tensor = tf.convert_to_tensor(
    np.expand_dims(image_np, 0), dtype=tf.float32)
detections, predictions_dict, shapes = detect_fn(input_tensor)

label_id_offset = 1
image_np_with_detections = image_np.copy()

viz_utils.visualize_boxes_and_labels_on_image_array(
      image_np_with_detections,
      detections['detection_boxes'][0].numpy(),
      (detections['detection_classes'][0].numpy() + label_id_offset).astype(int),
      detections['detection_scores'][0].numpy(),
      category_index,
      use_normalized_coordinates=True,
      max_boxes_to_draw=200,
      min_score_thresh=.5,
      agnostic_mode=False,
)

plt.figure(figsize=(12,16))
plt.imshow(image_np_with_detections)
plt.show()
In [ ]:
#run detector on test image
#it takes a little longer on the first run and then runs at normal speed. 
import random

TEST_IMAGE_PATHS = glob.glob('./test/test/*.jpg')
image_path = random.choice(TEST_IMAGE_PATHS)
image_np = load_image_into_numpy_array(image_path)

# Things to try:
# Flip horizontally
image_np = np.fliplr(image_np).copy()

# Convert image to grayscale
image_np = np.tile(
    np.mean(image_np, 2, keepdims=True), (1, 1, 3)).astype(np.uint8)

input_tensor = tf.convert_to_tensor(
    np.expand_dims(image_np, 0), dtype=tf.float32)
detections, predictions_dict, shapes = detect_fn(input_tensor)

label_id_offset = 1
image_np_with_detections = image_np.copy()

viz_utils.visualize_boxes_and_labels_on_image_array(
      image_np_with_detections,
      detections['detection_boxes'][0].numpy(),
      (detections['detection_classes'][0].numpy() + label_id_offset).astype(int),
      detections['detection_scores'][0].numpy(),
      category_index,
      use_normalized_coordinates=True,
      max_boxes_to_draw=200,
      min_score_thresh=.5,
      agnostic_mode=False,
)

plt.figure(figsize=(12,16))
plt.imshow(image_np_with_detections)
plt.show()
In [ ]:
#run detector on test image
#it takes a little longer on the first run and then runs at normal speed. 
import random

TEST_IMAGE_PATHS = glob.glob('./test/test/*.jpg')
image_path = random.choice(TEST_IMAGE_PATHS)
image_np = load_image_into_numpy_array(image_path)

# Things to try:
# Flip horizontally
image_np = np.fliplr(image_np).copy()

# Convert image to grayscale
# image_np = np.tile(
#     np.mean(image_np, 2, keepdims=True), (1, 1, 3)).astype(np.uint8)

input_tensor = tf.convert_to_tensor(
    np.expand_dims(image_np, 0), dtype=tf.float32)
detections, predictions_dict, shapes = detect_fn(input_tensor)

label_id_offset = 1
image_np_with_detections = image_np.copy()

viz_utils.visualize_boxes_and_labels_on_image_array(
      image_np_with_detections,
      detections['detection_boxes'][0].numpy(),
      (detections['detection_classes'][0].numpy() + label_id_offset).astype(int),
      detections['detection_scores'][0].numpy(),
      category_index,
      use_normalized_coordinates=True,
      max_boxes_to_draw=200,
      min_score_thresh=.5,
      agnostic_mode=False,
)

plt.figure(figsize=(12,16))
plt.imshow(image_np_with_detections)
plt.show()
In [ ]:
#run detector on test image
#it takes a little longer on the first run and then runs at normal speed. 
import random

TEST_IMAGE_PATHS = glob.glob('./test/test/*.jpg')
image_path = random.choice(TEST_IMAGE_PATHS)
image_np = load_image_into_numpy_array(image_path)

# Things to try:
# Flip horizontally
image_np = np.fliplr(image_np).copy()

# Convert image to grayscale
# image_np = np.tile(
#     np.mean(image_np, 2, keepdims=True), (1, 1, 3)).astype(np.uint8)

input_tensor = tf.convert_to_tensor(
    np.expand_dims(image_np, 0), dtype=tf.float32)
detections, predictions_dict, shapes = detect_fn(input_tensor)

label_id_offset = 1
image_np_with_detections = image_np.copy()

viz_utils.visualize_boxes_and_labels_on_image_array(
      image_np_with_detections,
      detections['detection_boxes'][0].numpy(),
      (detections['detection_classes'][0].numpy() + label_id_offset).astype(int),
      detections['detection_scores'][0].numpy(),
      category_index,
      use_normalized_coordinates=True,
      max_boxes_to_draw=200,
      min_score_thresh=.5,
      agnostic_mode=False,
)

plt.figure(figsize=(12,16))
plt.imshow(image_np_with_detections)
plt.show()
In [ ]:
#run detector on test image
#it takes a little longer on the first run and then runs at normal speed. 
import random

TEST_IMAGE_PATHS = glob.glob('./test/test/*.jpg')
image_path = random.choice(TEST_IMAGE_PATHS)
image_np = load_image_into_numpy_array(image_path)

# Things to try:
# Flip horizontally
image_np = np.fliplr(image_np).copy()

# Convert image to grayscale
image_np = np.tile(
    np.mean(image_np, 2, keepdims=True), (1, 1, 3)).astype(np.uint8)

input_tensor = tf.convert_to_tensor(
    np.expand_dims(image_np, 0), dtype=tf.float32)
detections, predictions_dict, shapes = detect_fn(input_tensor)

label_id_offset = 1
image_np_with_detections = image_np.copy()

viz_utils.visualize_boxes_and_labels_on_image_array(
      image_np_with_detections,
      detections['detection_boxes'][0].numpy(),
      (detections['detection_classes'][0].numpy() + label_id_offset).astype(int),
      detections['detection_scores'][0].numpy(),
      category_index,
      use_normalized_coordinates=True,
      max_boxes_to_draw=200,
      min_score_thresh=.5,
      agnostic_mode=False,
)

plt.figure(figsize=(12,16))
plt.imshow(image_np_with_detections)
plt.show()
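The cells above differ only in which augmentation lines are commented in or out. Those two toggles can be wrapped in one small helper so each run just changes the arguments. This is a sketch; `augment` is a name introduced here and reproduces exactly the flip and grayscale snippets used inline above:

```python
import numpy as np

def augment(image_np, flip=False, grayscale=False):
    """Optionally flip an HxWx3 uint8 image horizontally and/or collapse
    it to 3-channel grayscale, matching the inline snippets above."""
    if flip:
        image_np = np.fliplr(image_np).copy()
    if grayscale:
        image_np = np.tile(
            np.mean(image_np, 2, keepdims=True), (1, 1, 3)).astype(np.uint8)
    return image_np
```

With this in place, a cell becomes `image_np = augment(load_image_into_numpy_array(image_path), flip=True)` instead of editing comments.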
In [ ]:
from google.colab import drive
drive.mount('/gdrive')
Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3aietf%3awg%3aoauth%3a2.0%3aoob&scope=email%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdocs.test%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive.photos.readonly%20https%3a%2f%2fwww.googleapis.com%2fauth%2fpeopleapi.readonly&response_type=code

Enter your authorization code:
··········
Mounted at /gdrive

True Negatives

In [ ]:
#run detector on test image
#it takes a little longer on the first run and then runs at normal speed. 
import random

TEST_IMAGE_PATHS = glob.glob('./test/test_smoke/*.jpeg')
image_path = random.choice(TEST_IMAGE_PATHS)
image_np = load_image_into_numpy_array(image_path)

# Things to try:
# Flip horizontally
image_np = np.fliplr(image_np).copy()

# Convert image to grayscale
# image_np = np.tile(
#     np.mean(image_np, 2, keepdims=True), (1, 1, 3)).astype(np.uint8)

input_tensor = tf.convert_to_tensor(
    np.expand_dims(image_np, 0), dtype=tf.float32)
detections, predictions_dict, shapes = detect_fn(input_tensor)

label_id_offset = 1
image_np_with_detections = image_np.copy()

viz_utils.visualize_boxes_and_labels_on_image_array(
      image_np_with_detections,
      detections['detection_boxes'][0].numpy(),
      (detections['detection_classes'][0].numpy() + label_id_offset).astype(int),
      detections['detection_scores'][0].numpy(),
      category_index,
      use_normalized_coordinates=True,
      max_boxes_to_draw=200,
      min_score_thresh=.5,
      agnostic_mode=False,
)

plt.figure(figsize=(12,16))
plt.imshow(image_np_with_detections)
plt.show()
In [ ]:
#run detector on test image
#it takes a little longer on the first run and then runs at normal speed. 
import random

TEST_IMAGE_PATHS = glob.glob('./test/test_smoke/*.jpeg')
image_path = random.choice(TEST_IMAGE_PATHS)
image_np = load_image_into_numpy_array(image_path)

# Things to try:
# Flip horizontally
# image_np = np.fliplr(image_np).copy()

# Convert image to grayscale
# image_np = np.tile(
#     np.mean(image_np, 2, keepdims=True), (1, 1, 3)).astype(np.uint8)

input_tensor = tf.convert_to_tensor(
    np.expand_dims(image_np, 0), dtype=tf.float32)
detections, predictions_dict, shapes = detect_fn(input_tensor)

label_id_offset = 1
image_np_with_detections = image_np.copy()

viz_utils.visualize_boxes_and_labels_on_image_array(
      image_np_with_detections,
      detections['detection_boxes'][0].numpy(),
      (detections['detection_classes'][0].numpy() + label_id_offset).astype(int),
      detections['detection_scores'][0].numpy(),
      category_index,
      use_normalized_coordinates=True,
      max_boxes_to_draw=200,
      min_score_thresh=.5,
      agnostic_mode=False,
)

plt.figure(figsize=(12,16))
plt.imshow(image_np_with_detections)
plt.show()

Using a GIF to simulate real-time smoke detection

In [ ]:
test_image_dir = './test/test_smoke'
test_images_np = []
for i in range(1, 48):
  image_path = os.path.join(test_image_dir, 'frame (' + str(i) + ')' +'.jpeg')
  test_images_np.append(np.expand_dims(
      load_image_into_numpy_array(image_path), axis=0))

# Again, comment out this decorator if you want to run inference eagerly
@tf.function
def detect(input_tensor):
  """Run detection on an input image.

  Args:
    input_tensor: A [1, height, width, 3] Tensor of type tf.float32.
      Note that height and width can be anything since the image will be
      immediately resized according to the needs of the model within this
      function.

  Returns:
    A dict containing 3 Tensors (`detection_boxes`, `detection_classes`,
      and `detection_scores`).
  """
  preprocessed_image, shapes = detection_model.preprocess(input_tensor)
  prediction_dict = detection_model.predict(preprocessed_image, shapes)
  
  return detection_model.postprocess(prediction_dict, shapes)

# Note that the first frame will trigger tracing of the tf.function, which will
# take some time, after which inference should be fast.

label_id_offset = 1
for i in range(len(test_images_np)):
  input_tensor = tf.convert_to_tensor(test_images_np[i], dtype=tf.float32)
  detections = detect(input_tensor)

  plot_detections(
      test_images_np[i][0],
      detections['detection_boxes'][0].numpy(),
      detections['detection_classes'][0].numpy().astype(np.uint32)
      + label_id_offset,
      detections['detection_scores'][0].numpy(),
      category_index, figsize=(15, 20), image_name="gif_frame_" + ('%02d' % i) + ".jpeg")
In [ ]:
imageio.plugins.freeimage.download()

anim_file = 'smoke_test.gif'

filenames = glob.glob('gif_frame_*.jpeg')
filenames = sorted(filenames)
images = []
for filename in filenames:
  image = imageio.imread(filename)
  images.append(image)

imageio.mimsave(anim_file, images, 'GIF-FI', fps=5)

display(IPyImage(open(anim_file, 'rb').read()))
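The `sorted()` call above gives correct frame order only because the frames were written with zero-padded names (`'%02d'`). If you ever export frames without padding, lexicographic order puts `gif_frame_10` before `gif_frame_2`; a numeric sort avoids that. A minimal standard-library sketch (`numeric_sort` is a name introduced here):

```python
import re

def numeric_sort(filenames):
    """Sort frame filenames by the first integer embedded in each name,
    so 'gif_frame_2.jpeg' precedes 'gif_frame_10.jpeg'."""
    def frame_index(name):
        match = re.search(r'(\d+)', name)
        return int(match.group(1)) if match else -1
    return sorted(filenames, key=frame_index)
```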

Congrats!

Hope you enjoyed this! This is your playground now.

Next Steps

  • Try out different model architectures and scale your model
  • Try to reduce the loss further and improve accuracy
  • Test your model on a varied set of smoke images
  • Build an app using TFLite and deploy the application